I'm using AWS Copilot, but I think this question generalizes to App Runner. I'm trying to get environment variables set from Parameter Store, but I'm having no luck. My AWS Copilot manifest YAML sets them the way I saw in examples, but in the resulting App Runner configuration, and in production, the values seem to be interpreted as literals rather than as Parameter Store references.
Any idea how to properly connect App Runner to Parameter Store?
Unfortunately, App Runner currently doesn't provide an intrinsic way of integrating with SSM Parameter Store the way ECS does. As a result, Copilot doesn't support the secrets section for Request-Driven services either (refer to the Copilot doc here). As for environment variables, they are whatever you define in the manifest and will be injected as literals.
However, there is a workaround in Copilot that allows your app to use secrets stored in SSM Parameter Store. You can specify an addon template (e.g., policy.yaml) and put it in the copilot/${svc name}/addons/ local directory, with the following contents allowing the App Runner service to retrieve values from SSM Parameter Store:
Parameters:
  App:
    Type: String
    Description: Your application's name.
  Env:
    Type: String
    Description: The environment name your service, job, or workflow is being deployed to.
  Name:
    Type: String
    Description: The name of the service, job, or workflow being deployed.

Resources:
  MySSMPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: SSMActions
            Effect: Allow
            Action:
              - "ssm:GetParameters"
            Resource: !Sub 'arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/*'

Outputs:
  MySSMPolicyArn:
    Value: !Ref MySSMPolicy
After that, in your code you can use the AWS SDK to call the SSM API and retrieve any secrets you defined before. Let me know if you have any more questions!
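For example, a minimal boto3 sketch (the parameter name /my-app/my-secret is a made-up placeholder; get_parameters matches the ssm:GetParameters action granted by the policy above, and a SecureString parameter additionally needs kms:Decrypt on its key):

import boto3

# Credentials and region are assumed to come from the App Runner
# service's environment and instance role.
ssm = boto3.client("ssm")

def get_secret(name: str) -> str:
    # get_parameters maps to the ssm:GetParameters action from the addon policy.
    response = ssm.get_parameters(Names=[name], WithDecryption=True)
    if response["InvalidParameters"]:
        raise KeyError(f"No such parameter: {name}")
    return response["Parameters"][0]["Value"]

# "/my-app/my-secret" is a hypothetical parameter name.
db_password = get_secret("/my-app/my-secret")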
I am trying to deploy a serverless project which has S3 bucket creation CloudFormation in the serverless.yml file, but when I try to deploy, it says the S3 bucket already exists and the deployment fails.
I know an S3 bucket name should be globally unique, and I am damn sure the name I'm using is unique; even after changing it to something else, it still says the same.
The CloudFormation stack that it says the S3 bucket exists in is actually the newly created stack, so I'm not sure how to fix this. Can anyone help me out and tell me how to fix the deployment issue and what the cause of it is? :)
Thanks in advance.
The issue I had was that for one of the Lambdas I had the above-mentioned bucket as the event source, and when a bucket is added as an event source it actually creates that bucket as well. Therefore, when the explicit bucket-creation CloudFormation runs, it says the bucket already exists.
So I fixed it by keeping only the event source and removing the explicit declaration of that bucket.
If you add existing: true to the S3 event config in your serverless.yml file, it won't try to create the S3 bucket, like the example below:

funcName:
  handler: handler
  events:
    - s3:
        bucket: 'my-bucket-name'
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - suffix: .pdf
          - prefix: documents
Anything involving CloudFormation (or any other infrastructure-as-code) is fussy, and the error messages can mislead, meaning there are a ton of things that can cause this problem (see issues on GitHub like this one).
But in my experience, the most common causes of these kinds of problems are not a pre-existing bucket, but problems with AWS credentials, permissions, or region that give misleading error messages. To fix these, or at least rule them out:
Make sure your serverless.yml is set to the region you already deployed the stack in. Example:

custom:
  stage: dev
  region: us-east-2
Override any latent credentials from, for example, ~/.aws/credentials, by explicitly setting your credentials in the shell you'll use to deploy. Example from the Serverless docs:

export AWS_ACCESS_KEY_ID=<your access key here>
export AWS_SECRET_ACCESS_KEY=<your access secret here>
Make sure those AWS credentials have the roles and permissions they need.
But, as I mentioned, CloudFormation is fussy. There may be other problems to solve, but try these first. You may try them and still be beating your head against the wall, but it'll more likely be the right wall. Hope this helps.
Try using conditional statements and pass a parameter that decides whether or not to create the bucket:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  EnvType:
    Description: Environment type.
    Default: test
    Type: String
    AllowedValues:
      - prod
      - test
    ConstraintDescription: must specify prod or test.
Conditions:
  CreateProdResources: !Equals
    - !Ref EnvType
    - prod
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
  NewVolume:
    Type: 'AWS::EC2::Volume'
    Condition: CreateProdResources
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt
        - EC2Instance
        - AvailabilityZone
Follow the sample condition flow to decide whether to create a resource or not.
See this for more details
When deploying, the BucketName must be globally unique, across all accounts and regions. So if anyone has already created a bucket named "local-bucket-dev", it will throw:
An error occurred: AttachmentsBucket - local-bucket-dev already
exists.
Try setting the BucketName to something unique.
I hope that helps.
I would like to perform the following operations in order with CloudFormation.
1. Start up an EC2 instance.
2. Give it privileges to access the full internet using security group A.
3. Download particular versions of Java and Python.
4. Remove its internet privileges by removing security group A and adding security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a configured userdata script that itself downloads Java/Python and the awscli, as necessary, and then uses the awscli to switch security groups for the current EC2 instance.
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
The best way out is to utilise a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need; this Lambda function can then be called as a custom resource in the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function, and code the Lambda function to use the AWS SDK to make the modifications you need.
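A minimal sketch of what that Lambda could look like, assuming the template passes hypothetical InstanceId and SecurityGroupId properties to the custom resource (cfnresponse is the helper module CloudFormation bundles for Lambdas defined with inline ZipFile code):

import boto3
import cfnresponse

ec2 = boto3.client("ec2")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]
            # Replace the instance's security groups with the one passed in.
            # "InstanceId" and "SecurityGroupId" are hypothetical property
            # names; use whatever your template sends to the custom resource.
            ec2.modify_instance_attribute(
                InstanceId=props["InstanceId"],
                Groups=[props["SecurityGroupId"]],
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})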
I have leveraged this to post updates to my web server about the progress of the CloudFormation template. Below is sample code from the template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']
MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and processes them to calculate percentage progress. Based on that it sends out a request to my web server. For all we care, this Lambda function can do potentially anything you want it to do.
I have a SAM template which deploys a few lambdas, and I would like to use some parameters I created in the SSM Parameter Store.
I created 2 parameters for my tests:
/test/param which is a simple string
/test/param/encrypt which contains the same string as /test/param but is encrypted by a KMS key
In my SAM template, I'm trying to get the value of /test/param by following this blog post. Here is a snippet of my template:
Parameters:
  AuthPasswordPublic:
    Type: AWS::SSM::Parameter::Value<String>
    NoEcho: true
    MinLength: 8
    Description: Password for the "public" part of the website
    Default: /test/param

...

Resources:
  Auth:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs8.10
      Handler: auth.handler
      CodeUri: ./dist
      Environment:
        Variables:
          PASSWORD_PUBLIC: !Ref AuthPasswordPublic
          SEED: !Ref AuthSeed
      Events:
        GetResource:
          Type: Api
          Properties:
            Path: /auth
            Method: post
This should theoretically work when deployed to AWS. However, I would like to test it locally first. I'm already using aws-sam-local, and my credentials are properly configured on my local machine, as I'm able to use the AWS CLI. But when running this locally, the value of the env var PASSWORD_PUBLIC is empty. I tested both the plain-text and encrypted SSM parameters, but the results are the same.
I suspect that aws-sam-cli does not support SSM parameters yet, but I couldn't find any information about that online, nor in the GitHub issues/PRs. Any idea what is going on here?
aws-sam-cli uses the docker-lambda container, which according to the docs creates:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically...
This means that components such as AWS SSM are not re-created within the docker container. You can check the open Github issue here.
So you may have to resort to retrieving the SSM parameters from the host (with the AWS CLI configured) and passing them into the container when invoking the SAM CLI:

PASSWORD_PUBLIC=$(aws ssm get-parameter --with-decryption --name "/test/param/encrypt" --query 'Parameter.Value' --output text) sam local start-api
When I run CloudFormation deploy using a template with API Gateway resources, the first time I run it, it creates and deploys to stages. The subsequent times I run it, it updates the resources but doesn't deploy to stages.
Is that behaviour as intended? If yes, how'd I get it to deploy to stages whenever it updates?
(Terraform mentions a similar issue: https://github.com/hashicorp/terraform/issues/6613)
Seems like there is no way to easily create a new Deployment whenever one of your Cloudformation Resources changes.
One way to work around that would be to use a Lambda-backed Custom Resource (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html).
The Lambda should create the new Deployment only if one of your Resources has been updated. To determine whether one of your Resources has been updated, you will probably have to implement custom logic around this API call: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackEvents.html
In order to trigger updates on your Custom Resource, I suggest you supply a Cloudformation Parameter that will be used to force an update of your Custom Resource (e.g. the current time, or a version number).
Note that you will have to add a DependsOn clause to your Custom Resource that will include all Resources relevant to your API. Otherwise, your deployment might be created before all your API Resources are updated.
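For reference, a minimal sketch of such a custom-resource Lambda, assuming the template passes hypothetical RestApiId and StageName properties (the update-detection logic described above is left out; here a new deployment is created on every Create/Update):

import boto3
import cfnresponse  # helper module CloudFormation bundles for inline Lambdas

apigw = boto3.client("apigateway")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]
            # "RestApiId" and "StageName" are hypothetical property names;
            # use whatever your template sends to the custom resource.
            apigw.create_deployment(
                restApiId=props["RestApiId"],
                stageName=props["StageName"],
                description="Created by custom resource",
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})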
Hope this helps.
When your template specifies a deployment, CloudFormation will create that deployment only if it doesn't already exist. When you attempt to run it again, it observes that the deployment still exists so it won't recreate it, thus no deployment. You need a new resource id for the deployment so that it will create a new deployment. Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
CloudFormation in Amazon's words is:
AWS CloudFormation takes care of provisioning and configuring those resources for you
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
Redeployment of APIs is not a provisioning task... It is a promotion activity which is part of a stage in your software release process.
AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
CodePipeline also supports execution of Lambda functions from Actions in the pipeline. So, as advised before, create a Lambda function to deploy your API but call it from Codepipeline instead of CloudFormation.
Consult this page for details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
I was using the above approach, but it looked too complicated to me just to deploy the API Gateway. Also, if we change the names of the resources, it takes time to delete and recreate them, which increases downtime for your application.
I'm following the approach below to deploy the API Gateway to a stage using the AWS CLI, and it does not interfere with the CloudFormation stack deployment.
What I do is run the AWS CLI command below after the API Gateway deployment completes. It updates the existing stage with the latest changes.
aws apigateway create-deployment --rest-api-id tztstixfwj --stage-name stg --description 'Deployed from CLI'
The answer here is to use the AutoDeploy property of the Stage:
Stage:
  Type: AWS::ApiGatewayV2::Stage
  Properties:
    StageName: v1
    Description: 'API Version 1'
    ApiId: !Ref myApi
    AutoDeploy: true
Note that the 'DeploymentId' property must be unspecified when using 'AutoDeploy'.
See documentation, here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
From the blogspot post linked by TheClassic (best answer so far!), you have to keep in mind that if you aren't generating your templates with something that can insert a valid timestamp in place of $TIMESTAMP$, you must update that manually with a timestamp or some other unique ID. Here is my functional example; it successfully deletes the existing deployment and creates a new one, but I will have to update those unique values manually when I want to create another change set:
rDeployment05012019355:
  Type: AWS::ApiGateway::Deployment
  DependsOn: rApiGetMethod
  Properties:
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
    StageName: !Ref pStageName
rCustomDomainPath:
  Type: AWS::ApiGateway::BasePathMapping
  DependsOn: [rDeployment05012019355]
  Properties:
    BasePath: !Ref pPathPart
    Stage: !Ref pStageName
    DomainName:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-CustomDomainName'
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
I may be late, but here are the options with which you can do a redeployment when an API resource changes; they may be helpful to people still looking for options:
Try setting AutoDeploy to true. This applies to the V2 version of deployments; note that the API Gateway needs to have been created through V2, as V1 and V2 are not compatible with each other. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-autodeploy
Use a Lambda-backed custom resource, where the Lambda in turn calls the createDeployment API - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
Use a CodePipeline that has an action calling a Lambda function, much like the custom resource would - https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
Use SAM (Serverless Application Model), which follows a similar syntax to CloudFormation; it simplifies resource creation into abstractions and uses those to build and deploy a normal CloudFormation template. https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
If you are using an abstraction layer over CloudFormation like Sceptre, you can have a hook that calls createDeployment after any update to the resource. https://sceptre.cloudreach.com/2.3.0/docs/hooks.html
I went with the Sceptre option since I was already using Sceptre for CloudFormation deployment. Implementing hooks in Sceptre is easy as well.
Reading through this thread, I did not come to a conclusion right away, as the information is spread across multiple sources. I'll try to sum up the findings from here (and the linked sources) together with my own testing, to help others avoid the hunt.
Important to know: each API always has a dedicated URL. The associated stages only get a separate suffix. Updating the deployment does not change the URL; recreating the API does.
API
├─ RestAPI (incl. Resource, Methods etc)
├─ Deployment
├─ Stage - v1 https://6s...com/v1
├─ Stage - v2 https://6s...com/v2
Relation between stage and deployment:
To deploy AWS API Gateway through CloudFormation (Cfn), you need a RestApi-Cfn-Resource and a Deployment-Cfn-Resource. If you give the Deployment-Resource a stage name, it automatically creates a stage on top of the "normal" creation. If you leave this out, the API is created without any stage. Either way, once you have a deployment, you can add n stages to it by linking the two, but a stage and its API always have only one deployment.
Updating a simple API:
Now if you want to update this "simple API", consisting just of a RestAPI plus a deployment, you face the issue that if the deployment has a stage name, it cannot be updated as it already "exists". And for the deployment to be updated in the first place, you have to add a timestamp or hash to the deployment's resource name in CloudFormation, otherwise no update is triggered at all.
Solving the deployment update:
To now enable updating the deployment, you have to split deployment and stage up into separate Cfn-Resources. Meaning, you remove the stage name from the Deployment-Cfn-Resource and create a new Stage-Cfn-Resource which references the deployment resource. This way you can update the deployment. Still, the stage - the part you reference via URL - is not automatically updated.
Propagating the update from the deployment to your stages:
Now that we can update the deployment - aka the blueprint of the API - we can propagate the change to its respective stages. This step, as far as I know, is not possible using CloudFormation alone. Therefore, to trigger the update you either need to add a custom resource or you do it manually. Other non-CloudFormation ways are summed up in #Athi's answer above, but they are no solution for me as I want to limit the tooling used.
If anybody has an example for the Lambda update, please feel free to ping me - then I would add it here. The links I found so far only reference a plain template.
I hope this helped others understanding the context a bit better.
Sources:
Problem description with Cfn-template, 2
Adding timestamp to deployment resource, 2
Using CodePipeline as a solution
Related question and CLI update answer
Related terraform issue
Related AWS forum thread
This worked for me:
cfn.yml
APIGatewayStage:
  Type: 'AWS::ApiGateway::Stage'
  Properties:
    StageName: !Ref Environment
    DeploymentId: !Ref APIGatewayDeployment$TIMESTAMP$
    RestApiId: !Ref APIGatewayRestAPI
    Variables:
      lambdaAlias: !Ref Environment
    MethodSettings:
      - ResourcePath: '/*'
        DataTraceEnabled: true
        HttpMethod: "*"
        LoggingLevel: INFO
        MetricsEnabled: true
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
APIGatewayDeployment$TIMESTAMP$:
  Type: 'AWS::ApiGateway::Deployment'
  Properties:
    RestApiId: !Ref APIGatewayRestAPI
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
bitbucket-pipelines.yml
script:
  - python3 deploy_api.py
deploy_api.py
import time

# Replace the $TIMESTAMP$ placeholder in the template with the current epoch
# time, so the Deployment resource gets a fresh logical ID on every pipeline run.
file_name = 'infra/cfn.yml'
ts = str(time.time()).split(".")[0]
print(ts)

with open(file_name, 'r') as file:
    filedata = file.read()

filedata = filedata.replace('$TIMESTAMP$', ts)

with open(file_name, 'w') as file:
    file.write(filedata)
Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
If you have something to do the $TIMESTAMP$ replacement, I'd probably go with that as it's cleaner and you don't have to do any manual API Gateway management.
I have found that the other solutions posted here mostly do the job with one major caveat - you can't manage your Stage and Deployment separately in CloudFormation because whenever you deploy your API Gateway, you have some sort of downtime between when you deploy the API and when the secondary process (custom resource / lambda, code pipeline, what have you) creates your new deployment. This downtime is because CloudFormation only ever has the initial deployment tied to the Stage. So when you make a change to the Stage and deploy, it reverts back to the initial deployment until your secondary process creates your new deployment.
*** Note that if you are specifying a StageName on your Deployment resource, and not explicitly managing a Stage resource, the other solutions will work.
In my case, I don't have that $TIMESTAMP$ replacement piece, and I needed to manage my Stage separately so I could do things like enable caching, so I had to find another way. So the workflow and relevant CF pieces are as follows
Before triggering the CF update, see if the stack you're about to update already exists, and set stack_exists: true|false (a sketch of this check follows the workflow below)
Pass that stack_exists variable in to your CF template(s), all the way down to the stack that creates the Deployment and Stage
The following condition:
Conditions:
  StackExists: !Equals [!Ref StackAlreadyExists, "True"]
The following Deployment and Stage:
# Only used for initial creation, secondary process re-creates this
Deployment:
  DeletionPolicy: Retain
  Type: AWS::ApiGateway::Deployment
  Properties:
    Description: "Initial deployment"
    RestApiId: ...
Stage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !If
      - StackExists
      - !Ref AWS::NoValue
      - !Ref Deployment
    RestApiId: ...
    StageName: ...
Secondary process that does the following:
# looks up `apiId` and `stageName` and sets variables
CURRENT_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id <apiId> --stage-name <stageName> --query 'deploymentId' --output text)
aws apigateway create-deployment --rest-api-id <apiId> --stage-name <stageName>
aws apigateway delete-deployment --rest-api-id <apiId> --deployment-id ${CURRENT_DEPLOYMENT_ID}
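For the first step (checking whether the stack already exists), a minimal boto3 sketch; describe_stacks raises a ClientError for unknown stacks:

import boto3
from botocore.exceptions import ClientError

def stack_exists(stack_name: str) -> bool:
    # Returns True if the CloudFormation stack exists in any live state.
    cfn = boto3.client("cloudformation")
    try:
        cfn.describe_stacks(StackName=stack_name)
        return True
    except ClientError as err:
        # Unknown stacks raise a ValidationError whose message says
        # the stack "does not exist".
        if "does not exist" in str(err):
            return False
        raise

The result is then passed down as the StackAlreadyExists template parameter, e.g. via --parameter-overrides StackAlreadyExists=True.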
Use SAM's AWS::Serverless::Api.
This does the deployment for you when it runs the transformation.