I have a scenario where I want to invoke a Lambda function from Cloud Custodian and pass the newly created bucket name to that function. Is there any way to pass parameters to the Lambda function from the Custodian event? Thanks
Below is my Cloud Custodian policy:
policies:
  - name: lambda-s3-configure-standards-real-time
    resource: aws.s3
    description: |
      This policy is triggered when a new S3 bucket is created and it will invoke another lambda.
    mode:
      type: cloudtrail
      events:
        - CreateBucket
      role: some-role
      timeout: 200
    actions:
      - type: invoke-lambda
        function: Lambda-function-name
Have you tried checking the event dictionary that is passed as the payload to your invoked Lambda function? I believe the bucket name should already be present in that payload.
As an example, when the bucket is created via the console, your payload should contain the bucket name at $.detail.requestParameters.bucketName
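For reference, the CloudTrail event that the policy matches (and forwards to the invoked function as part of the payload) looks roughly like this when trimmed to the relevant fields, rendered here as YAML; the bucket name is illustrative:

# Trimmed sketch of the CloudTrail-via-EventBridge CreateBucket event
detail-type: AWS API Call via CloudTrail
source: aws.s3
detail:
  eventSource: s3.amazonaws.com
  eventName: CreateBucket
  requestParameters:
    bucketName: my-new-bucket  # i.e. $.detail.requestParameters.bucketName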
I have set up CI/CD for an AWS Lambda function such that the new version is automatically deployed using GitHub Actions. By default, AWS creates a new Lambda ID (and thus URL) for this Lambda function. This means that the front-end portion of my code will need to be updated to contain the updated URL. Is there a way to perform such updating automatically, e.g. by saving the URL as an environment variable and inserting it in the code with a GitHub Action?
Or is there alternatively a way to re-use the old Lambda function URL for new deployments?
You can get the updated Lambda URL by using SAM template outputs as follows:
Resources:
  MyFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      # ...handler, runtime, code, etc.
      FunctionUrlConfig:  # creates an implicit MyFunctionUrl resource
        AuthType: NONE

Outputs:
  MyFunctionUrlEndpoint:
    Description: "My Lambda Function URL Endpoint"
    Value: !GetAtt MyFunctionUrl.FunctionUrl
Then you can access the output as described in this answer:
aws cloudformation describe-stacks --stack-name stack_name --query 'Stacks[0].Outputs[?OutputKey==`MyFunctionUrlEndpoint`].OutputValue' --output text
which can then be further processed in e.g. your front-end code.
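For instance, a GitHub Actions step along these lines could export the URL for the front-end build (a sketch; the stack name and environment variable name are placeholders, and AWS credentials are assumed to be configured earlier in the workflow):

- name: Export Lambda Function URL
  run: |
    FUNCTION_URL=$(aws cloudformation describe-stacks \
      --stack-name stack_name \
      --query 'Stacks[0].Outputs[?OutputKey==`MyFunctionUrlEndpoint`].OutputValue' \
      --output text)
    echo "FUNCTION_URL=$FUNCTION_URL" >> "$GITHUB_ENV"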
There may be easier methods, but this should work!
I have set up two Lambda functions, deployed with AWS SAM. The first one uses the JS AWS SDK to run putRule and putTargets to trigger the second Lambda with a cron job. When I run the first Lambda, I see both the rule and target correctly set up in EventBridge.
I also create the following permission for the second Lambda in my AWS SAM template
InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyLambda
    Action: lambda:InvokeFunction
    Principal: 'events.amazonaws.com'
and can see the Policy in the console
The only result I see of this cron event (at the timestamp I've chosen for the rule) is a failed invocation of the second Lambda, and CloudWatch doesn't provide any useful information
Any idea of why this is failing or how to retrieve any error? Might "events.amazonaws.com" be the wrong Principal for that?
I am looking into EventSourceMapping but I can't see my case anywhere in the docs
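For what it's worth, events.amazonaws.com is the documented Principal for EventBridge. One refinement worth trying, sketched below with a placeholder ARN pattern (the rules here are created at runtime via the SDK, so a wildcard is used rather than a reference to a CloudFormation resource), is to scope the permission with a SourceArn:

InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyLambda
    Action: lambda:InvokeFunction
    Principal: 'events.amazonaws.com'
    # Placeholder: matches rules created at runtime in this account/region
    SourceArn: !Sub arn:aws:events:${AWS::Region}:${AWS::AccountId}:rule/*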
I want to create a Lambda function that is triggered from a S3 bucket created within the same CloudFormation stack but cannot get the syntax quite right.
The event should only be fired when an object is uploaded to /uploads. I also need to specify some bucket properties (CORS).
S3 bucket definition in resources
resources:
  Resources:
    myBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket
        # CORS properties...
Event in function definition:
events:
  - s3:
      bucket: myBucket
      event: s3:ObjectCreated:Put
      rules:
        - prefix: uploads/
I do not want to use existing: true because it creates some helper objects for this simple task. I cannot find any documentation or examples that fit my case.
The existing: true flag only applies to S3 buckets created outside of your serverless project (buckets that already exist), which is not the case here.
The situation you face is that you can't use the typical serverless framework convenience of defining the bucket in the Lambda event trigger, like this:
functions:
  users:
    handler: users.handler
    events:
      - s3:
          bucket: photos
          event: s3:ObjectRemoved:*
The reason that you can't use that method is that it creates the photos bucket and does not allow you to supply additional bucket configuration, e.g. CORS or bucket policy.
The solution to this is to create the S3 bucket in the S3 provider configuration, with CORS policy, and then refer to the bucket from your Lambda function event configuration. For example:
provider:
  s3:
    photosBucket:
      name: photos
      versioningConfiguration:
        Status: Enabled
      corsConfiguration:
        CorsRules:
          # Example rule; adjust to your needs
          - AllowedMethods:
              - GET
              - PUT
            AllowedOrigins:
              - "*"
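The function's event configuration then refers to that bucket by its logical name under provider.s3, roughly like this (a sketch; the handler path and prefix are placeholders):

functions:
  users:
    handler: users.handler
    events:
      - s3:
          bucket: photosBucket  # the key defined under provider.s3
          event: s3:ObjectCreated:Put
          rules:
            - prefix: uploads/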
I have a problem whose root cause I have so far been unable to identify.
I have an AWS Step Functions state machine that should be invoked once a file is uploaded to an S3 bucket.
So far, when I upload a file to the S3 bucket, the Lambda function defined in the StartAt key (StartAt: ImgUploadedEvent) starts, as I can see in the Lambda logs.
Here is the code:
stepFunctions:
  stateMachines:
    ValidateImageStateMachine:
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: [StepFuncLogGroup, Arn]
      definition:
        Comment: "This state function validates the images after users upload them to S3"
        StartAt: ImgUploadedEvent
        States:
          ImgUploadedEvent:
            Type: Task
            Resource:
              Fn::GetAtt: [ImgUploaded, Arn]
            End: true
Below is the Lambda function that is declared as the start of the state machine.
As I can see from the logs, this Lambda function does indeed get called once I modify an object in S3.
functions:
  ImgUploaded:
    handler: src/stepfunctions/imageWasUploadedEvent.handler
    events:
      - s3:
          bucket: !Ref AttachmentsBucket
          existing: true
    iamRoleStatements:
      - Effect: "Allow"
        Action:
          - "states:StartExecution"
        Resource:
          - "*"
To check that the Step Function was working I created a log group and added it to the Step Function.
resources:
  Resources:
    StepFuncLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: /aws/stepfunctions/${self:service}-${self:provider.stage}
I see this cloud watch log group correctly associated with the Step function in the AWS Console.
However, when I upload an object to S3, the Lambda function's logs show that it was invoked, but I cannot see any logs in the Step Function log group.
My question is:
Is the Step Function indeed working, and is it just an issue with the Step Function's logs?
Or is the Step Function itself not running, and is the Lambda function being invoked just as a plain Lambda function, totally independent of the Step Function?
What do I have to do so that the Lambda function gets triggered as part of the Step Function?
BR
After studying how Step Functions work, I finally arrived at the conclusion that this is the wrong pattern.
Nowhere in the documentation of the plugins or of Amazon does it say that what I did is a valid pattern.
Step Functions can be started by CloudWatch Events, and those events can originate from a change on S3. That is not what I did here.
There is no direct link between an action on S3 and a Step Function.
The AWS documentation could lead someone to think otherwise; the title there is Starting a State Machine Execution in Response to Amazon S3 Events.
But the state machine does not start because of the S3 event itself; it starts because of the CloudWatch event that the S3 change generates through CloudTrail.
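With the serverless-step-functions plugin, that wiring can be sketched as follows (this assumes the plugin's cloudwatchEvent trigger and a CloudTrail trail that records S3 data events for the bucket; the bucket name is a placeholder):

stepFunctions:
  stateMachines:
    ValidateImageStateMachine:
      events:
        - cloudwatchEvent:
            event:
              source:
                - aws.s3
              detail-type:
                - AWS API Call via CloudTrail
              detail:
                eventSource:
                  - s3.amazonaws.com
                eventName:
                  - PutObject
                requestParameters:
                  bucketName:
                    - my-attachments-bucket  # placeholder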
A Lambda proxy function is a good way of invoking a state machine. It is an easy-to-use and very common pattern, as we can also use it with SQS etc.
So the correct response to this question is:
The state machine does not start because it is never invoked. We just called a Lambda function that was used as the StartAt of a state machine. That does not mean I invoked the state machine.
That is the reason why there are no logs for the state machine, while there are correct logs for the Lambda function.
Hope this response helps.
I will add more details and references to this response.
BR
Or is the Step Function itself not running, and is the Lambda function being invoked just as a plain Lambda function, totally independent of the Step Function?
To verify whether the Lambda was invoked as part of the Step Function, can't you just check the execution history in the Step Functions console? Also, unless you have explicitly configured S3 to publish events to Lambda, your Lambda will not be automatically invoked upon uploading files to S3.
What do I have to do so that the Lambda function gets triggered as part of the Step Function?
To trigger a Step Function on file upload to S3, you can follow this tutorial: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
I am trying to create a lambda function from a CloudFormation template based on this example:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-lambda.html
As can be seen from this link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
there is no way to add a trigger for the Lambda function (like an S3 upload trigger).
Is there a workaround to specify the trigger while writing the template?
You can use a CloudWatch Events rule to trigger your Lambda function:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyCloudWatchRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Rule to trigger lambda"
      Name: "MyCloudWatchRule"
      EventPattern: <Provide Valid JSON Event pattern>
      State: "ENABLED"
      Targets:
        - Arn: "arn:aws:lambda:us-west-2:12345678:function:MyLambdaFunction"
          Id: "1234567-acvd-awse-kllpk-123456789"
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule-syntax
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
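Note that the rule alone is not sufficient; the target function also needs a resource-based permission that allows EventBridge to invoke it. A minimal sketch (resource names are illustrative):

MyLambdaInvokePermission:
  Type: "AWS::Lambda::Permission"
  Properties:
    FunctionName: "MyLambdaFunction"
    Action: "lambda:InvokeFunction"
    Principal: "events.amazonaws.com"
    # Restrict invocations to this specific rule
    SourceArn: !GetAtt MyCloudWatchRule.Arn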
It's been a while, so I imagine you've solved the problem, but I'll put in my 2 cents to help others.
It's best to use SAM (Serverless Application Model) for this kind of thing, so use AWS::Serverless::Function instead of AWS::Lambda::Function:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html
There, you can specify an EventSource, which accepts the following possible values (see the sketch after this list):
S3
SNS
Kinesis
DynamoDB
SQS
Api
Schedule
CloudWatchEvent
CloudWatchLogs
IoTRule
AlexaSkill
Cognito
HttpApi
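For instance, an S3-triggered function might look roughly like this (a sketch; the bucket logical ID, handler, and runtime are placeholders):

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      CodeUri: ./src
      Events:
        UploadEvent:
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket  # an AWS::S3::Bucket declared in the same template
            Events: s3:ObjectCreated:*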
SAM does the rest of the work. Follow this guide for the rest of the details:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
Nowadays, this issue is fixed by Amazon:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
Just create Lambda permissions like in the example.
A Lambda function can be triggered by several AWS services, such as S3, SNS, SQS, API Gateway, etc. Check out the full list in the AWS docs.
I suggest you use Altostra Designer, which lets you create and configure Lambda functions very quickly and also choose what will trigger them.
You need to add a NotificationConfiguration to the S3 bucket definition. However, this will lead to a circular dependency where the S3 bucket refers to the Lambda function and the Lambda function refers to the S3 bucket.
To avoid this circular dependency, create all resources (including the S3 bucket and the Lambda function) without specifying the notification configuration. Then, after you have created your stack, update the template with a notification configuration and then update the stack.
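A rough sketch of the second-pass addition (logical IDs and the bucket name are illustrative); note the accompanying permission that allows S3 to invoke the function:

MyBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: my-bucket
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: s3:ObjectCreated:*
          Function: !GetAtt MyFunction.Arn
MyBucketPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyFunction
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceArn: arn:aws:s3:::my-bucket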
Here is a SAM-based YAML example for a CloudWatch log group trigger:
lambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri:
      Bucket: someBucket
      Key: someKey
    Description: someDescription
    Handler: function.lambda_handler
    MemorySize:
      Ref: MemorySize
    Runtime: python3.7
    Role: !GetAtt 'iamRole.Arn'
    Timeout:
      Ref: Timeout
    Events:
      NRSubscription0:
        Type: CloudWatchLogs
        Properties:
          LogGroupName: 'someLogGroupName'
          FilterPattern: "" # Match everything
For an S3 example event, see https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-s3.html