Unable to trigger AWS Lambda by upload to AWS S3

I am trying to build a Kibana dashboard fed with Twitter data collected via AWS Kinesis Firehose. The data passes into an S3 bucket, which triggers a Lambda function that passes the data on to Amazon Elasticsearch Service and then to Kibana. I am following this blog: https://aws.amazon.com/blogs/big-data/building-a-near-real-time-discovery-platform-with-aws/
The data is loading into the S3 bucket correctly, but it never arrives in Kibana. I believe this is because the Lambda function is not being triggered by events in S3 as I had hoped (there are no invocations or logs). I think this is because I have not set permissions properly; the Lambda function can be invoked manually by the test event.
On the Lambda function page I chose an existing role, which I called lambda_s3_exec_role, with the AWSLambdaExecute policy attached, but I feel I'm missing something more specific to S3. I have been unable to follow this step in the blog's "create Lambda function" section because I do not recognise those options:
"10. Choose lambda_s3_exec_role (if this value does not exist, choose Create new role S3 execution role)."
Can anyone help me create the appropriate role/policy for the Lambda function, or spot what the problem may be?
From "view permissions" on the Lambda function I currently have:
FUNCTION POLICY
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "****",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "****",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::****"
        }
      }
    }
  ]
}
EXECUTION ROLE
{
  "roleName": "lambda_s3_exec_role",
  "policies": [
    {
      "document": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*"
          }
        ]
      },
      "name": "AWSLambdaExecute",
      "id": "****",
      "type": "managed",
      "arn": "arn:aws:iam::aws:policy/AWSLambdaExecute"
    }
  ]
}

The permissions you have listed look OK, so I am going to try to provide some steps that might help you find the issue, since it is difficult to tell exactly where your problem lies.
Does the execution role have a trust relationship with the trusted entity lambda.amazonaws.com?
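It should contain the standard Lambda trust policy, which looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}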
Does your event prefix match the prefix in Firehose? In the tutorial they are both twitter/raw-data/. If Firehose is writing to a path that doesn't match the event prefix, the event won't fire.
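You can check what notification configuration the bucket actually has with the CLI (the bucket name is a placeholder):
aws s3api get-bucket-notification-configuration --bucket your-bucket
The response should include a Lambda function configuration whose prefix filter matches the path Firehose writes to.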
Does the Lambda function throw any errors when you invoke it manually?
Does the Lambda function write to the logs when you invoke it manually?
Test the Lambda function using dummy data (example data below).
CLI
aws lambda invoke \
--invocation-type RequestResponse \
--function-name helloworld \
--region region \
--log-type Tail \
--payload file://dummy_event.json \
--profile adminuser \
outputfile.txt
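Note: if you are on AWS CLI v2, a JSON payload may additionally require --cli-binary-format raw-in-base64-out, since v2 expects a base64-encoded payload by default.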
Example data
dummy_event.json
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-west-2",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::sourcebucket"
        },
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}

I struggled with this for a long time and eventually realized that the rule that triggers the Lambda function cannot have exactly the same name as the Lambda function itself, or it won't work.

Related

AWS: Trigger step function state machine on s3 object creation using Event Bridge Not Working

I enabled notifications for Amazon EventBridge on my S3 bucket.
Then I created an EventBridge rule with the following event pattern:
{
  "detail": {
    "bucket": {
      "name": ["arn:aws:s3:::my-bucket"]
    }
  },
  "detail-type": ["Object Created"],
  "source": ["aws.s3"]
}
Then I added my state machine as the target of this rule. I also attached an IAM role with the following policy for this event target.
"Statement": [
{
"Effect": "Allow",
"Action": [ "states:StartExecution" ],
"Resource": [ "arn:aws:states:*:*:stateMachine:*" ]
}
]
Then I attached the following policy to my Step Functions state machine as well:
{
  "Action": "events:*",
  "Resource": "arn:aws:events:us-east-1:my-account-id:event-bus/default",
  "Effect": "Allow"
}
After doing all of this, still my state machine is not getting invoked.
What am I missing here? How can I debug where the issue might be?
Have you checked whether your custom pattern matches the event?
I think you do not need the ARN in the name field.
Try with:
{
  "detail": {
    "bucket": {
      "name": ["my-bucket"]
    }
  },
  "detail-type": ["Object Created"],
  "source": ["aws.s3"]
}
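You can also verify a candidate pattern against a sample event with the EventBridge CLI before attaching it to the rule (the file names here are placeholders):
aws events test-event-pattern \
--event-pattern file://pattern.json \
--event file://event.json
This returns whether the sample event matches the pattern, so you can iterate on the pattern without uploading objects each time.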

Permissions for jobs from a specific AWS Batch queue

How do I allow only jobs from a certain AWS Batch queue (and based on a specific job definition) to publish to a specific SNS topic?
I thought about attaching an IAM policy to the jobs with the statement:
{
  "Effect": "Allow",
  "Action": "sns:Publish",
  "Resource": ["<arn of the specific SNS topic>"],
  "Condition": {"ArnEquals": {"aws:SourceArn": "arn:aws:???"}}
}
But what should be the source ARN? ARN of the job queue, ARN of the job definition? Or maybe this should be set up completely differently?
I had a similar experience when I worked with AWS Batch jobs executed in Fargate containers, which follow the same principles as ECS with regard to assigning roles and permissions.
If you are going to publish messages to a specific topic from the code executed inside your container, then you should create a role with the necessary permissions and use its ARN in the JobRoleArn property of your job definition.
For example (there can be minor mistakes in the code below, but I am just trying to explain the concept here):
Role CloudFormation:
"roleresourceID": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"AWS": "*"
}
}
],
"Version": "2012-10-17"
},
"RoleName": "your-job-role"
}
}
Policy attached to the role:
"policyresourceid": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": "sns:Publish",
"Effect": "Allow",
"Resource": "<arn of the specific SNS topic>"
}
],
"Version": "2012-10-17"
},
"PolicyName": "your-job-role-policy",
"Roles": [
{
"Ref": "roleresourceID"
}
]
}
}
And finally attach the role to the job definition:
....other job definition properties
"JobRoleArn": {
  "Fn::GetAtt": [
    "roleresourceID",
    "Arn"
  ]
}
Of course you may structure and format the roles and policies any way you like; the main idea of this explanation is that you need to attach the proper role via the JobRoleArn property of your job definition.
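To illustrate the result, here is a minimal sketch (Node.js, AWS SDK v2) of publishing from the code inside the container once this job role is attached; the topic ARN placeholder is the same one used above:
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

// The container automatically picks up credentials for the role passed
// via JobRoleArn, so no explicit keys are needed here.
sns.publish({
  TopicArn: '<arn of the specific SNS topic>',
  Message: 'Batch job finished',
}).promise()
  .then(() => console.log('Message published'))
  .catch((err) => console.error('Publish failed', err));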

CloudWatch rule that triggers SNS when a pattern does NOT match completely

Is it possible to create a CloudWatch rule that triggers SNS when a pattern does NOT match completely?
I hope the following example will make the question clearer:
{
  "source": [
    "aws.ec2"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "RunInstances"
    ]
  }
}
Additionally I want to specify a region, "awsRegion": "eu-central-1", but (here is the tricky part) I want SNS to be triggered when awsRegion is NOT eu-central-1.
The idea is to receive a notification when someone makes a mistake and runs an instance in the wrong region.
I will also add more rules once I know how to do this, so the question is not only about the region but general.
Thanks in advance!
TeoVal
It's currently not possible to trigger when a pattern does NOT match.
You will have to receive notifications for all regions on your SNS topic and implement your own logic with a Lambda function subscribed to your topic (or directly with Lambda as a target).
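As a minimal sketch (Node.js, AWS SDK v2) of that filtering Lambda, assuming the rule matches all RunInstances events and the topic ARN is supplied via a hypothetical TOPIC_ARN environment variable:
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

exports.handler = async (event) => {
  // Forward only API calls made outside the allowed region.
  if (event.detail && event.detail.awsRegion !== 'eu-central-1') {
    await sns.publish({
      TopicArn: process.env.TOPIC_ARN, // hypothetical environment variable
      Subject: 'EC2 instance launched outside eu-central-1',
      Message: JSON.stringify(event.detail, null, 2),
    }).promise();
  }
};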
Also note that it is possible to restrict users from making API calls in specific regions using a policy similar to the following (which restricts calls to the us-east-1 and eu-central-1 regions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RegionsRestriction",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "eu-central-1"
          ]
        }
      }
    }
  ]
}

AWS Lambda S3:::ObjectCreated:Put event returns invalid object key

Good day!
I'm building a Lambda function that is supposed to resize images saved in a particular bucket and re-save them in the same bucket under a different prefix (the Lambda function will first check for this prefix to avoid infinite loops). When I attempt to upload the image '123.jpg', for instance, event.Records[0].s3.object.key in the handler function returns 'undefined2018-02-26-08-40-37-DBAB838DACA3F368'.
As you can imagine, this causes my Lambda function to crash. If anyone has any ideas on this, please let me know, as I've been knocking my head against this for almost a week. Please find additional resources below. Note that I have also created a Lambda event under the S3 bucket settings:
LOGS for lambda Event
event: {
  Records: [
    {
      eventVersion: '2.0',
      eventSource: 'aws:s3',
      awsRegion: 'us-east-1',
      eventTime: '2018-02-26T08:40:37.281Z',
      eventName: 'ObjectCreated:Put',
      userIdentity: { principalId: 'XXXXXXXXXXXXX' },
      requestParameters: { sourceIPAddress: '8.8.8.8' },
      responseElements: {
        'x-amz-request-id': '05465A75942F4593',
        'x-amz-id-2': 'GWXnftcTHzfdAOuH40R2LO+h2laQhcO9eeU4JIzsRfYpL3HsDHmxzmqvE6lIlmAfcDO8O+gXU6U='
      },
      s3: {
        s3SchemaVersion: '1.0',
        configurationId: '19945d41-71f2-4ae0-9004-b1c6c06b06da',
        bucket: {
          name: 'sample-bucket-23',
          ownerIdentity: { principalId: 'XXXXXXXXXXXX' },
          arn: 'arn:aws:s3:::sample-bucket-23'
        },
        object: {
          key: 'undefined2018-02-26-08-40-37-DBAB838DACA3F368',
          size: 355,
          eTag: '56b6395fe1bfea7cb98cd55d3cba3933',
          sequencer: '005A93C8053FEF92A2'
        }
      }
    }
  ]
}
Lambda Access Permissions
{
  "roleName": "lambda_full_s3_v2",
  "policies": [
    {
      "document": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:ListBucket"
            ],
            "Resource": [
              "arn:aws:s3:::sample-bucket-23"
            ]
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:PutObject",
              "s3:GetObject",
              "s3:DeleteObject"
            ],
            "Resource": [
              "arn:aws:s3:::sample-bucket-23/*"
            ]
          },
          {
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:logs:*:*:*"
          }
        ]
      },
      "name": "oneClick_lambda_basic_execution_1519631544835",
      "type": "inline"
    }
  ]
}
That isn't an event triggered by your upload. That is an event triggered by a new log file being written to your bucket, because you have configured your bucket to write its logs to itself, rather than to a different bucket in the same region.
Amazon S3 uses the following object key format for the log objects it uploads in the target bucket:
TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
You will want to change your event trigger to watch only the prefix for images, reconfigure your Lambda code to ignore entries matching this pattern, or create a separate bucket for catching logs.
(I suspect the appearance of the string undefined at the beginning of the log object key is a console bug when you don't specify a prefix. You could also change your bucket's logging configuration to add a prefix, e.g. logs/ for the log files, if you want them written to this bucket.)
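If you go the "ignore" route, here is a minimal sketch (Node.js) of the check, assuming the images live under a hypothetical images/ prefix; note that object keys arrive URL-encoded in the event:
exports.handler = async (event) => {
  for (const record of event.Records) {
    // S3 event keys are URL-encoded, with spaces encoded as '+'.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    if (!key.startsWith('images/')) {
      console.log(`Skipping non-image object: ${key}`);
      continue;
    }
    // ... resize and re-save under the output prefix here
  }
};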

aws batch: submit job using lambda

Context: AWS, S3, Lambda, Batch.
I have a Lambda function that is triggered when a file is uploaded to an S3 bucket. I want the Lambda function to submit a Batch job.
(edit: Between S3 and Lambda everything works fine. The problem is between Lambda and Batch.)
Q: What role do I have to give the Lambda function in order to be able to submit the Batch job?
My Lambda function gets an AccessDeniedException and fails to submit the job with:
const AWS = require('aws-sdk');
const Batch = new AWS.Batch();

const params = {
  jobDefinition: BATCH_JOB_DEFINITION,
  jobName: BATCH_JOB_NAME,
  jobQueue: BATCH_JOB_QUEUE,
};

Batch.submitJob(params).promise().then .......
It seems that this was the permission I was looking for: batch:SubmitJob. With this permission in its role, the Lambda function was able to submit the job.
iamRoleStatements:
  - Effect: Allow
    Action:
      - batch:SubmitJob
    Resource: "arn:aws:batch:*:*:*"
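If your account supports resource-level permissions for AWS Batch, you can likely scope this down from the wildcard to the specific queue and job definition (the names below are placeholders):
iamRoleStatements:
  - Effect: Allow
    Action:
      - batch:SubmitJob
    Resource:
      - "arn:aws:batch:*:*:job-queue/my-job-queue"
      - "arn:aws:batch:*:*:job-definition/my-job-definition:*"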
You can create a policy like the AWS Batch managed policy.
The following policy allows admin access; you can modify it as per your needs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "batch:*",
        "cloudwatch:GetMetricStatistics",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeKeyPairs",
        "ecs:DescribeClusters",
        "ecs:Describe*",
        "ecs:List*",
        "logs:Describe*",
        "logs:Get*",
        "logs:TestMetricFilter",
        "logs:FilterLogEvents",
        "iam:ListInstanceProfiles",
        "iam:ListRoles"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": [
        "arn:aws:iam::*:role/AWSBatchServiceRole",
        "arn:aws:iam::*:role/ecsInstanceRole",
        "arn:aws:iam::*:role/iaws-ec2-spot-fleet-role",
        "arn:aws:iam::*:role/aws-ec2-spot-fleet-role",
        "arn:aws:iam::*:role/AWSBatchJobRole*"
      ]
    }
  ]
}
Attach the policy to the Lambda function's role and try it again. Refer to the AWS documentation.