We are migrating a REST API service from EC2 to Lambda/API Gateway (to lower billing) using AWS SAM. This service is consumed only by an internal application (intranet). We don't have VPN connectivity between on-premises and AWS. Each function is housed in a separate folder which includes a YAML template file. When deployed using the same stack name, the new deployment deletes the previous function. We tried to use
DeletionPolicy: Retain
which errored out with:
'property DeletionPolicy not defined for resource of type AWS::Serverless::Function'
Our requirement is to have a common base URL without using R53 (if possible).
Is there a better way to do this?
CloudFormation attributes such as DeletionPolicy are not defined inside the Properties section. You may need to un-indent DeletionPolicy so it sits at the resource level rather than within Properties.
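A minimal sketch of the corrected placement (handler, runtime, and path values are illustrative):

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    DeletionPolicy: Retain        # resource attribute: a sibling of Type/Properties, not inside Properties
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src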
I believe you don't need to retain the old Lambda; it's fine to delete the old one as you deploy new changes.
What you need is to tie the Lambda to the API Gateway.
In other words, your CloudFormation template should contain the Lambda resources, and the API Gateway should point to them like this:
Uri: !Sub 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaFunction.Arn}/invocations'
within an AWS::ApiGateway::Method resource.
I quoted that code block from the AWS samples on GitHub.
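For context, a minimal sketch of where that Uri sits (resource names like ExampleRestApi are illustrative):

ExampleMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    RestApiId: !Ref ExampleRestApi                # hypothetical AWS::ApiGateway::RestApi in the same template
    ResourceId: !GetAtt ExampleRestApi.RootResourceId
    HttpMethod: ANY
    AuthorizationType: NONE
    Integration:
      Type: AWS_PROXY
      IntegrationHttpMethod: POST                 # Lambda integrations are always invoked via POST
      Uri: !Sub 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaFunction.Arn}/invocations'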
In case you have different CloudFormation templates, you might consider using the Outputs section to export resources from one stack to another.
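A minimal sketch of that pattern (the export name is illustrative):

# In the stack that owns the Lambda:
Outputs:
  LambdaFunctionArn:
    Value: !GetAtt LambdaFunction.Arn
    Export:
      Name: my-service-lambda-arn

# In the API Gateway stack, consume it with:
#   !ImportValue my-service-lambda-arn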
I see that when one SAM deployment is applied over another, the previous resources are deleted and new resources are created. The newly created resources are not actually the same resources, and have a different ARN than before.
This causes some problems that I am facing right now. Say we have non-SAM resources which need to be configured against the SAM resources.
For example, we have an SNS topic to which our API Gateway is subscribed. After a deployment the ARN of the API Gateway changes, and we would have to subscribe again.
There are more problems like this that I am facing, but this is the gist of it.
Any help appreciated!
The ARN of the resource can remain the same if the name of the resource is specified in the template. Since the name then remains the same, the ARN will remain the same too, even after multiple deployments.
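For example, with a Lambda function in a SAM template (a minimal sketch; the name and properties are illustrative), an explicit FunctionName keeps the function's ARN stable across deployments:

Resources:
  ApiBackendFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: my-api-backend    # explicit name => stable ARN across deployments
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src

One caveat: once a resource has an explicit name, CloudFormation must replace it rather than rename it when a change requires a new name.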
I would like to automate setting up the collection of AWS Application Load Balancer logs using Sumo Logic as documented here:
https://help.sumologic.com/07Sumo-Logic-Apps/01Amazon_and_AWS/AWS_Elastic_Load_Balancer_-_Application/01_Collect_Logs_for_the_AWS_Elastic_Load_Balancer_Application_App
This involves creating a bucket, creating a Sumo Logic hosted collector with an S3 source, taking the URL of the collector source provided by Sumo Logic and then creating an SNS Topic with an HTTP subscription where the subscription URL is the one provided by the Sumo Logic source.
The issue with this is that the Sumo Logic source URL is not known at synthesis time. The bucket must be deployed, then the Sumo Logic pieces created, then the SNS topic created.
As best I can figure, I will have to do this through separate invocations of CDK using separate stacks, which is slower: one stack to create the bucket; after deploying that stack, use the Sumo Logic API to create (or confirm prior creation of) the Sumo Logic hosted collector and source; then another CDK deploy to create the SNS topic and HTTP subscription.
I was just wondering if anyone knew of a better way to do this, perhaps some sort of deploy time hook that could be used.
There are two ways (that I know of) in which you can automate the collection of AWS Application Load Balancer logs.
Using CloudFormation
Sumo Logic has a template that creates the collection process for the AWS Application Load Balancer as part of the AWS Observability Solution. You can fork the repository and create your own CloudFormation template after removing the resources you do not require.
Sumo Logic also has a serverless application that automatically enables access logging for existing and new load balancers (those created after the application is installed). There is an example template which uses the application.
Using Terraform
As mentioned by Grzegorz, you can write a Terraform script as well.
Disclaimer: Currently employed by Sumo Logic.
You could try using a Custom Resource SDK Call to trigger a lambda that does what you want.
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_custom-resources.AwsSdkCall.html
(I know this is not a perfect answer, as it suggests using another tool, yet I believe it fulfills the needs expressed in the question.)
How about using Terraform?
The sumologic_s3_source resource in Terraform can create the source at Sumo AND output its URL for other uses within Terraform - e.g. to set up AWS resources.
The docs on this even mention URL being one of the returned values:
url - The HTTP endpoint to use with SNS to notify Sumo Logic of new files.
Disclaimer: I am currently employed by Sumo Logic.
Here is the thing: I have a serverless project that creates many AWS resources (Lambdas, API Gateway, etc.). Now I need to change the tags I set a couple of months ago, but when I run a serverless deploy I see this message: "A version for this Lambda function exists ( 6 ). Modify the function to create a new version..". I have been reading and applying a couple of different workarounds, but I hit the same issue.
Has anybody seen this behavior? Is there a way to retag all resources without deleting the whole stack or doing it manually?
Thanks for your recommendations.
You can use a serverless plugin (serverless-plugin-resource-tagging). It will tag your Lambda functions, DynamoDB tables, buckets, streams, API Gateway, and CloudFront resources. The way it works is that you provide stackTags containing your tags under the provider section of serverless.yml:
plugins:
  - serverless-plugin-resource-tagging

provider:
  stackTags:
    STACK: "${self:service}"
    PRODUCT: "Product Name"
    COPYRIGHT: "Copyright"
You can also update tag values using this plugin.
As a DevOps guy I wanted to use the same template to provision both dev and prod stacks, where dev stacks should not have any DeletionPolicy but prod stacks should utilize one.
So, at first sight CFT gives OK tooling for this, but... there is no way to parameterize the S3 DeletionPolicy (that I've been able to locate, at least).
Here are some threads I dug up:
https://forums.aws.amazon.com/message.jspa?messageID=560586
https://www.unixdaemon.net/cloud/cloudformation-annoyance-deletion-policy-parameters/
The suggested workaround from AWS was to make the whole resource conditional, which leads to duplicating the resource into "Deletable" and "Undeletable" versions, with all the depending resources having to handle that condition...
This seems wonky and bloated. Is there a way to parameterize this, or a better methodology to accomplish my end goal?
Doesn't seem like there's an option in CFT other than resource duplication.
What you can do is create a Lambda function with a Python script that sets up the S3 deletion policy. That Lambda function can be triggered through SNS during CloudFormation stack creation. How this can be configured is described here:
Is it possible to trigger a lambda on creation from CloudFormation template
But in your particular case I'd go with resource duplication in the same CFT.
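A minimal sketch of that duplication pattern (the parameter and resource names are illustrative):

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]
    Default: dev

Conditions:
  IsProd: !Equals [!Ref Environment, prod]
  IsDev: !Not [!Condition IsProd]

Resources:
  RetainedBucket:
    Type: AWS::S3::Bucket
    Condition: IsProd
    DeletionPolicy: Retain      # prod: bucket survives stack deletion
  DeletableBucket:
    Type: AWS::S3::Bucket
    Condition: IsDev            # dev: no DeletionPolicy, deleted with the stack

Depending resources then pick whichever bucket exists, e.g. with !If [IsProd, !Ref RetainedBucket, !Ref DeletableBucket].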
I've been working with CloudFormation YAML for a while and have found it to be comprehensive - until now. I'm struggling in trying to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen thus far seem to require that you create the bucket in the same CloudFormation script as the Lambda function. This doesn't work for me, because we have a design goal to be able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by creating buckets with the account name/region in the name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team; it is something the underlying CloudFormation service can't do.
There is a possible workaround if you implement a custom resource which triggers a separate Lambda function to modify the existing bucket and link it to the Lambda function that you want to deploy.
As "implement a Custom Resource" isn't very specific: here is an AWS GitHub repo with scaffold code to help write one, and then you declare something like the following in your template (where LambdaToBucket is the custom function you wrote). I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (telling S3 to notify Lambda about changes); the other is a Lambda Permission on the function (allowing invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      # LambdaToBucket is the custom-resource handler function you wrote,
      # defined elsewhere in this template
      ServiceToken: !GetAtt LambdaToBucket.Arn
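The Lambda Permission half doesn't strictly need the custom resource, since it doesn't touch the existing bucket; a sketch of declaring it directly in the same template (the function and bucket names are illustrative):

  TargetFunctionS3Permission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref TargetFunction             # hypothetical function S3 should invoke
      Action: lambda:InvokeFunction
      Principal: s3.amazonaws.com
      SourceArn: arn:aws:s3:::my-existing-bucket    # ARN of the pre-existing bucket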