AWS Lambda: read outputs from CloudFormation

I would like to read an output parameter from a CloudFormation stack dynamically in a Lambda@Edge function (JavaScript).
Can this be achieved using the Amplify library or some other way? Any examples of how to do this?
I believe I need to give the function the cloudformation:DescribeStacks permission to do this.
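As a sketch of what such a function might look like (assuming the Node.js AWS SDK v2, with made-up stack and output names; the helper names are hypothetical, and the function does need cloudformation:DescribeStacks permission):

```javascript
// Hypothetical sketch: read a CloudFormation stack output from a Lambda
// function using the AWS SDK for JavaScript (v2). Stack and output names
// below are made-up examples.
function getStackOutput(stackName, outputKey, callback) {
  // Lazy require so the pure helper below can be used without the SDK.
  const AWS = require('aws-sdk');
  const cfn = new AWS.CloudFormation({ region: 'us-east-1' });
  cfn.describeStacks({ StackName: stackName }, (err, data) => {
    if (err) return callback(err);
    callback(null, findOutput(data.Stacks, outputKey));
  });
}

// Pure helper: pull one OutputValue out of a DescribeStacks response.
function findOutput(stacks, outputKey) {
  for (const stack of stacks || []) {
    const match = (stack.Outputs || []).find(o => o.OutputKey === outputKey);
    if (match) return match.OutputValue;
  }
  return undefined; // output key not found in any stack
}
```

One caveat for Lambda@Edge specifically: edge functions don't support environment variables, so the stack name would have to be hard-coded or derived, and many teams instead bake the output value into the function code at deploy time.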

Related

How to capture serverless website screenshot using AWS Lambda?

How to run an AWS Lambda function on a regular basis that saves screenshot of the webpage of a given specific URL?
Because your question is so generic, I'm just going to provide the resources I used to build a similar Lambda function myself.
Getting puppeteer into Lambda
https://github.com/alixaxel/chrome-aws-lambda
Taking screenshots in Puppeteer
https://www.scrapehero.com/how-to-take-screenshots-of-a-web-page-using-puppeteer/
Then just run your job on a cron, or queue it up however you would like, and dump the screenshot into S3.
Good luck!
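To make those pieces concrete, here is a rough sketch (assuming the chrome-aws-lambda package from the linked repo; the bucket name is made up and the key-building helper is hypothetical):

```javascript
// Hypothetical sketch of a screenshot Lambda wired together from the
// linked resources. Bucket name and event shape are made-up examples.
async function handler(event) {
  // Lazy requires: these packages only exist in the deployed bundle.
  const chromium = require('chrome-aws-lambda');
  const AWS = require('aws-sdk');

  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
  const page = await browser.newPage();
  await page.goto(event.url, { waitUntil: 'networkidle2' });
  const png = await page.screenshot({ type: 'png' });
  await browser.close();

  // Dump the screenshot into S3 under a timestamped key.
  await new AWS.S3().putObject({
    Bucket: 'my-screenshot-bucket', // made-up name
    Key: screenshotKey(event.url, new Date()),
    Body: png,
    ContentType: 'image/png',
  }).promise();
}

// Pure helper: turn a URL and date into a safe S3 object key.
function screenshotKey(url, date) {
  const safe = url.replace(/^https?:\/\//, '').replace(/[^a-zA-Z0-9.-]+/g, '_');
  return `screenshots/${safe}/${date.toISOString()}.png`;
}
```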

How to import a text file from S3 to Lambda through CloudFormation?

I have a key that is shared among different services; it is currently stored in an S3 bucket inside a text file.
My goal is to read that value and pass it to my Lambda service through CloudFormation.
For an EC2 instance it was easy, because I could download and read the file by putting the scripts inside my CloudFormation JSON file. But I have no idea how to do it for my Lambdas.
I tried to put my credentials in the GitLab pipeline, but because of the access permissions it doesn't let GitLab pass them on, so my best and least expensive option right now is to do it in CloudFormation.
The easiest method would be to have the Lambda function read the information from Amazon S3.
The only way to get CloudFormation to "read" some information from Amazon S3 would be to create a Custom Resource, which involves writing an AWS Lambda function. However, since you already have a Lambda function, it would be easier to simply have that function read the object.
It's worth mentioning that, rather than storing such information in Amazon S3, you could use the AWS Systems Manager Parameter Store, which is a great place to store configuration information. Your various applications can then use Parameter Store to store and retrieve the configuration. CloudFormation can also access the Parameter Store.
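As a minimal sketch of the Parameter Store route (the parameter name and function details here are made up), CloudFormation can resolve the value at deploy time and hand it to the function as an environment variable:

```yaml
Parameters:
  SharedKey:
    # CloudFormation resolves this from Parameter Store at deploy time;
    # "/myapp/shared-key" is a made-up parameter name.
    Type: AWS::SSM::Parameter::Value<String>
    Default: /myapp/shared-key

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      # ...Handler, Runtime, Code, Role as usual...
      Environment:
        Variables:
          SHARED_KEY: !Ref SharedKey
```

Note that values passed this way are visible in the deployed function's configuration, so for secrets you may prefer reading the parameter at runtime instead.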

How do I add a Lambda Function with an S3 Trigger in CloudFormation?

I've been working with CloudFormation YAML for a while and have found it to be comprehensive - until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen thus far seem to require that you create the bucket in the same CloudFormation script as the Lambda function. This doesn't work for me, because we have a design goal to be able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by including the account name/region in the bucket name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team: it's something the underlying CloudFormation service can't do.
There is a possible workaround, if you implement a Custom resource which would trigger a separate Lambda function to modify the existing bucket and link it to the Lambda function that you want to deploy.
As "implement a Custom Resource" isn't very specific: there is an AWS GitHub repo with scaffold code to help write one, and then you declare something like the following in your template (where LambdaToBucket is the custom resource function you wrote). I've found that you need to configure two things in that function: a bucket notification configuration on the bucket (telling S3 to notify Lambda about changes), and a Lambda permission on the function (allowing invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
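A sketch of the core work that custom resource function might do (function and bucket names are placeholders, and a real handler must also report success or failure back to CloudFormation via the response URL, which is omitted here):

```javascript
// Hypothetical sketch of a custom resource handler's core work:
// wire an existing bucket to an existing Lambda function.
async function joinLambdaToBucket(bucketName, functionArn) {
  const AWS = require('aws-sdk'); // lazy require
  const s3 = new AWS.S3();
  const lambda = new AWS.Lambda();

  // 1. Lambda permission: allow S3 to invoke the target function.
  await lambda.addPermission({
    FunctionName: functionArn,
    StatementId: 's3-invoke',
    Action: 'lambda:InvokeFunction',
    Principal: 's3.amazonaws.com',
    SourceArn: `arn:aws:s3:::${bucketName}`,
  }).promise();

  // 2. Bucket notification: tell S3 to notify the function on new objects.
  await s3.putBucketNotificationConfiguration({
    Bucket: bucketName,
    NotificationConfiguration: notificationConfig(functionArn),
  }).promise();
}

// Pure helper: build the notification configuration document.
function notificationConfig(functionArn) {
  return {
    LambdaFunctionConfigurations: [{
      Events: ['s3:ObjectCreated:*'],
      LambdaFunctionArn: functionArn,
    }],
  };
}
```

Be aware that putBucketNotificationConfiguration replaces the bucket's whole notification configuration, so a careful handler reads and merges any existing configuration first.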

AWS SAM - how to handle a large number of endpoints

We're building an API using AWS SAM, built on the Lambda Node template in CodeStar. Things were going well until our template.yml file became too big. Whenever the code is pushed and CloudFormation starts to execute the change set and create a stack for the SAM endpoints, it fails and rolls back to the last successful build.
It seems that we have too many resources, which exceeds the CloudFormation limit per stack.
I tried splitting the template file, editing the buildspec to handle two template files with two aws cloudformation package commands, and adding another artifact. But that didn't work either: only the first template is recognized and only one stack is created.
I can't find a way to make an automated deployment that creates multiple stacks.
I'd appreciate some input into this and suggestions to handle such a scenario.
Thanks in advance.
You should try using the nested stacks pattern. Instead of splitting your current stack into multiple parallel stacks, you will create a parent stack that will in turn create multiple child stacks.
More information here.
AWS SAM (as of SAM v1.9.0) supports nested applications, which map to nested CloudFormation stacks and get around the 200-resource limit (AWS::Serverless::Application transforms into an AWS::CloudFormation::Stack).
https://github.com/awslabs/serverless-application-model/releases/tag/v1.9.0
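A minimal sketch of what that split looks like in the parent template (the child paths and logical names are made up):

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  # Each nested application becomes its own CloudFormation stack,
  # so each child gets its own resource budget.
  UsersApi:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./users/template.yaml   # made-up path
  OrdersApi:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./orders/template.yaml  # made-up path
```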
The main thing to look at is which components you have in your SAM template: are there any dependencies? Do all functions share the same API Gateway? Do all functions access the same DynamoDB table?
In my case, I split the SAM project by API (API Gateway + CRUD functions) in a monorepo layout, where each folder contains its own SAM template.
If you have a shared service like Redis, SNS, or SQS, you can put it in a separate stack and use the export/import feature to import the ARN of the service.
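The export/import pattern for a shared service looks roughly like this (resource and export names are made up):

```yaml
# In the shared-services stack: export the ARN of the shared resource.
Outputs:
  QueueArn:
    Value: !GetAtt SharedQueue.Arn
    Export:
      Name: shared-queue-arn   # made-up export name

# In a consuming stack, reference it by export name wherever an ARN
# is needed, e.g.:
#   SomeProperty: !ImportValue shared-queue-arn
```

One trade-off to know: CloudFormation won't let you delete or change an exported value while another stack imports it.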

Add trigger to lambda function automatically

I managed to deploy an AWS Lambda function using Travis; however, I also need to add a trigger to it (in my case, Kinesis). Has anyone done this? If there is no out-of-the-box way to do it with Travis, I suppose I need to add a script using the AWS CLI? If anyone has done this, could you share some advice or something I could use as a reference?
I primarily wanted to add the trigger with Travis, but Terraform makes it much simpler.
So I can create my IAM roles, the Kinesis stream, and the event source mapping between Kinesis and Lambda using Terraform: https://www.terraform.io/docs/providers/aws/r/lambda_event_source_mapping.html
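Following that documentation, the mapping itself is only a few lines of HCL (the stream and function resource names below are placeholders):

```hcl
# Sketch: map an existing Kinesis stream to a Lambda function.
# "example" resource names are made-up placeholders.
resource "aws_lambda_event_source_mapping" "kinesis_trigger" {
  event_source_arn  = aws_kinesis_stream.example.arn
  function_name     = aws_lambda_function.example.arn
  starting_position = "LATEST"
}
```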
If you have any different ways that you believe is better, please do not hesitate in adding here. Thanks.