aws batch: submit job using lambda

Context: AWS, S3, Lambda, Batch.
I have a Lambda that is triggered when a file is uploaded to an S3 bucket. I want the Lambda to submit a Batch job.
(Edit: between S3 and Lambda everything works fine. The problem is between Lambda and Batch.)
Q: What role/permissions do I have to give the Lambda in order for it to be able to submit the Batch job?
My Lambda gets an AccessDeniedException and fails to submit the job when running:
const params = {
  jobDefinition: BATCH_JOB_DEFINITION,
  jobName: BATCH_JOB_NAME,
  jobQueue: BATCH_JOB_QUEUE,
};
Batch.submitJob(params).promise().then(...)

It turned out that batch:SubmitJob was the permission I was looking for. With that action allowed, the Lambda was able to submit the job.
iamRoleStatements:
  - Effect: Allow
    Action:
      - batch:SubmitJob
    Resource: "arn:aws:batch:*:*:*"

You can create a policy similar to the AWS Batch managed policy.
The following policy allows admin access to Batch; you can modify it as per your needs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "batch:*",
        "cloudwatch:GetMetricStatistics",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeKeyPairs",
        "ecs:DescribeClusters",
        "ecs:Describe*",
        "ecs:List*",
        "logs:Describe*",
        "logs:Get*",
        "logs:TestMetricFilter",
        "logs:FilterLogEvents",
        "iam:ListInstanceProfiles",
        "iam:ListRoles"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": [
        "arn:aws:iam::*:role/AWSBatchServiceRole",
        "arn:aws:iam::*:role/ecsInstanceRole",
        "arn:aws:iam::*:role/iaws-ec2-spot-fleet-role",
        "arn:aws:iam::*:role/aws-ec2-spot-fleet-role",
        "arn:aws:iam::*:role/AWSBatchJobRole*"
      ]
    }
  ]
}
Attach the policy to the Lambda's execution role and try it again. Refer to the AWS documentation for details.
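If your function happens to be defined with the AWS CDK instead of the Serverless Framework, a minimal sketch of granting just the submit permission looks like this (fn is a placeholder for your existing function construct):

import * as iam from 'aws-cdk-lib/aws-iam';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// `fn` stands in for the Lambda function construct already defined in your stack.
declare const fn: lambda.Function;

fn.addToRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['batch:SubmitJob'],
    // Scope this down to your job queue/definition ARNs if you prefer.
    resources: ['arn:aws:batch:*:*:*'],
  })
);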

Related

How to add IAM permission to CloudFront in order to associate Lambda@Edge?

I am trying to update my CloudFront distribution using the CDK. While updating, I get this error message:
Lambda#Edge cannot retrieve the specified Lambda function. Update the IAM policy to add permission: lambda:GetFunction for resource: arn:aws:lambda:us-east-1:xxxxxxxx:function:edge-lambda-stack-xxxxxxx-xxxxxxxx-xxxxxxx:1
After digging around, I found this AWS docs page: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html
However, I am unable to work out where to add these permissions. Can somebody guide me on where to add the lambda:GetFunction permission?
CDK Code
const uriRedirector = new cloudfront.experimental.EdgeFunction(
  this,
  'UriRedirector',
  {
    code: lambda.Code.fromAsset('dist/events/object-cache/uri-redirector'),
    runtime: lambda.Runtime.NODEJS_14_X,
    handler: 'index.handle',
  }
)

this.distribution = new cloudfront.Distribution(this, 'Distribution2', {
  defaultBehavior: {
    origin: s3Origin,
    edgeLambdas: [
      {
        functionVersion: uriRedirector.currentVersion,
        eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
      },
    ],
    originRequestPolicy: defaultBehaviourOriginRequestPolicy,
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.HTTPS_ONLY,
    allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
  },
  ....
const cfnDistribution = this.distribution.node
  .defaultChild as cloudfront.CfnDistribution
cfnDistribution.overrideLogicalId(props.oldDistributionLogicalId)
Create an IAM policy in IAM and attach the policy to a user or role.
By default, AWS Lambda automatically creates a role, and you can attach the policy to that role.
Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Lambda",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:ListTags",
        "lambda:RemovePermission",
        "lambda:TagResource",
        "lambda:UntagResource",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:GetLayerVersion"
      ],
      "Resource": [
        "arn:aws:lambda:us-east-1:xxxxxxxx:function:edge-lambda-stack-xxxxxxx-xxxxxxxx-xxxxxxx:*"
      ]
    }
  ]
}
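If you would rather keep this inside the CDK app, a rough sketch of creating an equivalent policy and attaching it to the principal that runs the deployment could look like the following. Here scope, deployRoleArn, and the function ARN are placeholders, and attaching the statement directly to your IAM user in the console works just as well.

import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

// `scope` is your stack; `deployRoleArn` is the ARN of whatever principal runs the deployment.
declare const scope: Construct;
declare const deployRoleArn: string;

const edgeDeployPolicy = new iam.Policy(scope, 'EdgeLambdaDeployPolicy', {
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['lambda:GetFunction', 'lambda:GetFunctionConfiguration'],
      // Placeholder ARN -- replace with your edge function's ARN and version.
      resources: ['arn:aws:lambda:us-east-1:123456789012:function:my-edge-fn:*'],
    }),
  ],
});

edgeDeployPolicy.attachToRole(iam.Role.fromRoleArn(scope, 'DeployRole', deployRoleArn));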

Unable to add trigger to AWS Lambda

I am trying to add SQS as a trigger to my Lambda function running in an AWS VPC, but it throws an error:
An error occurred when creating the trigger: The provided execution role does not have permissions to call ReceiveMessage on SQS (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: d34b7525-5c69-4434-a015-112e8e74f447; Proxy: null)
I also tried adding AWSLambdaVPCAccessExecutionRole to the role's policies via IAM, but no luck.
I am unable to figure out where I am making a mistake. Please help me out if anyone has had a similar experience in the past or knows how to resolve it. Thank you in advance!
Please attach the managed policy AWSLambdaSQSQueueExecutionRole to your Lambda execution role.
If your Lambda function works with any other AWS services, you can try creating a custom role and adding the specific permissions.
In AWS, if one service wants to access another service, the role needs those specific permissions.
For more information on Lambda permissions, please check the managed Lambda permissions documentation.
You need to add the following actions to the IAM Role attached to your lambda:
sqs:ReceiveMessage
sqs:DeleteMessage
sqs:GetQueueAttributes
Otherwise your Lambda will not be able to receive any messages from the queue. The DeleteMessage action allows it to remove a message from the queue once it has been successfully processed. As the resource, set the ARN of your SQS queue. The policy should look like this:
{
  "Action": [
    "sqs:DeleteMessage",
    "sqs:ReceiveMessage",
    "sqs:GetQueueAttributes"
  ],
  "Resource": "arn:aws:sqs:region:accountid:queuename",
  "Effect": "Allow"
}
If you're looking for a managed policy, have a look at AWSLambdaSQSQueueExecutionRole.
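If the function and queue are managed in code, the same wiring can be sketched with the CDK; adding the event source both creates the trigger and grants the consume permissions listed above to the function's execution role (queue and fn are assumed to be existing constructs):

import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Assumed existing constructs in your stack.
declare const fn: lambda.Function;
declare const queue: sqs.Queue;

// Creates the SQS trigger and grants ReceiveMessage, DeleteMessage,
// GetQueueAttributes (plus a few related actions) on the queue.
fn.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));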
Attach a policy like this to the Lambda role. You might have to change account_number to your account number if you need to invoke another Lambda from this Lambda:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-west-1:account_number:function:*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Resource": "*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "sqs:*"
      ],
      "Resource": "*"
    }
  ]
}

How to give a Fargate Task the right permissions to upload to S3

I want to upload to S3 from a Fargate task. Can this be achieved by only specifying an ExecutionRoleArn, as opposed to specifying both an ExecutionRoleArn and a TaskRoleArn?
If I specify an ExecutionRoleArn that has the following permission policies attached:
Custom policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example_bucket/*"
    }
  ]
}
AmazonECSTaskExecutionRolePolicy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
With the following trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "events.amazonaws.com",
          "lambda.amazonaws.com",
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Would this be sufficient to allow the task to upload to S3? Or do I need to define a TaskRoleArn?
The ExecutionRoleArn is used by the service to set up the task correctly; this includes pulling any images down from ECR.
The TaskRoleArn is used by the task itself to give it the permissions it needs to interact with other AWS services (such as S3).
Technically both ARNs could point to the same role; however, I would suggest keeping them as separate roles to avoid confusion over which permissions each scenario requires.
Additionally, the trust relationship should include ecs.amazonaws.com. The full list of service principals, depending on how you're using ECS, is below (although most can be removed, such as spot if you're not using Spot, or autoscaling if you're not using Auto Scaling).
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"ecs.application-autoscaling.amazonaws.com",
"autoscaling.amazonaws.com"
In the case of Fargate, the two IAM roles play different parts.
Execution Role
This role is mandatory; you cannot run the task without it, even if you add the execution role policy to the task role.
To reproduce this error, just set the execution role to None; you will not be able to launch the task.
AWS Forums (Unable to create a new revision of Task Definition)
Task Role
This role is optional, and you can add the S3-related permissions to this role.
Optional IAM role that tasks can use to make API requests to authorized AWS services.
Your policy seems okay:
Create an ecs_s3_upload_role
Add the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example_bucket/*"
    }
  ]
}
Now the Fargate task will be able to upload to the S3 bucket.
The policies on your execution role won't give the application code any S3 permissions. Thus you should define your S3 permissions in a task role:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task.
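As an illustration of that split, here is a rough CDK sketch (the construct names and the bucket are placeholders, not from the question): the task role receives s3:PutObject, while the automatically created execution role keeps only image-pull and logging permissions.

import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

// Assumed existing stack and bucket.
declare const scope: Construct;
declare const bucket: s3.Bucket;

// Task role: permissions used by the application code inside the container.
const taskDef = new ecs.FargateTaskDefinition(scope, 'UploadTaskDef', {
  cpu: 256,
  memoryLimitMiB: 512,
});

// Adds s3:PutObject (and related write actions) on the bucket to the task role.
bucket.grantPut(taskDef.taskRole);

// The execution role (ECR pulls, log delivery) is created separately by the
// construct when a container definition needs it; no S3 permissions go there.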

Unable to trigger AWS Lambda by upload to AWS S3

I am trying to build a Kibana dashboard fed with Twitter data collected via AWS Kinesis Firehose: the data passes into an S3 bucket, which triggers a Lambda function, which passes the data to AWS Elasticsearch and then on to Kibana. I am following this blog: https://aws.amazon.com/blogs/big-data/building-a-near-real-time-discovery-platform-with-aws/
The data is loading into the S3 bucket correctly, but it never arrives in Kibana. I believe this is because the Lambda function is not being triggered by events in S3 as I would have hoped (there are no invocations or logs), and I think that is because I have not set the permissions properly. The Lambda function can be invoked manually with a test event.
On the Lambda function page I chose an existing role, which I called lambda_s3_exec_role and which has the AWSLambdaExecute policy attached to it, but I feel I'm missing something more specific to S3. I have been unable to follow this line in the blog's "create Lambda function" section because I do not recognise those options:
"10. Choose lambda_s3_exec_role (if this value does not exist, choose Create new role S3 execution role)."
Can anyone help me create the appropriate role/policy for the Lambda function, or spot what the problem may be?
From viewing the permissions on the Lambda function, I currently have:
FUNCTION POLICY
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "****",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "****",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::****"
        }
      }
    }
  ]
}
EXECUTION ROLE
{
  "roleName": "lambda_s3_exec_role",
  "policies": [
    {
      "document": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*"
          }
        ]
      },
      "name": "AWSLambdaExecute",
      "id": "****",
      "type": "managed",
      "arn": "arn:aws:iam::aws:policy/AWSLambdaExecute"
    }
  ]
}
The permissions you have listed look OK, so I am going to suggest some steps that might help find the issue, since it is difficult to pinpoint exactly where your problem might be:
Does the execution role have a trust relationship with lambda.amazonaws.com as a trusted entity?
Does your event prefix match the prefix in Firehose? In the tutorial they are both twitter/raw-data/. If Firehose is writing to a path that doesn't match the event prefix, the Lambda won't be invoked.
Does the Lambda throw any errors when you invoke it manually?
Does the Lambda write to the logs when you invoke it manually?
Test the Lambda using dummy data (example data below).
CLI
aws lambda invoke \
--invocation-type RequestResponse \
--function-name helloworld \
--region region \
--log-type Tail \
--payload file://dummy_event.json \
--profile adminuser \
outputfile.txt
Example data
source
dummy_event.json
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-west-2",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::sourcebucket"
        },
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}
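For reference, a handler typically pulls the bucket and key out of that event shape like this (a minimal sketch; the decodeURIComponent step matters because S3 URL-encodes keys and replaces spaces with '+'):

export const handler = async (event: any) => {
  for (const record of event.Records ?? []) {
    const bucket = record.s3.bucket.name;
    // Keys arrive URL-encoded, with spaces as '+'.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`Object created: s3://${bucket}/${key}`);
    // ...fetch the object and forward it to Elasticsearch here.
  }
};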
I struggled with this for a long time and eventually realized that the rule that triggers the Lambda cannot have exactly the same name as the Lambda itself, or it won't work.

Correct Kinesis stream principal

What principal should be used when specifying a Kinesis stream as a principal in AWS? Specifically, I want to write an AWS::Lambda::Permission resource in CloudFormation: the Lambda will be triggered by a Kinesis stream, and Principal is required according to the docs.
None of these work for me:
kinesis.amazonaws.com
kinesisstream.amazonaws.com
kinesisstreams.amazonaws.com
kinesis-stream.amazonaws.com
I haven't found any documentation which gives a mapping of service to principal. I found this page, which shows the principal for Kinesis Analytics is kinesisanalytics.amazonaws.com.
I looked at the AWS service namespaces page, but it didn't have any information about principals. The namespace for streams appeared to just be kinesis, but that didn't work for me, as mentioned above.
The correct one is kinesisanalytics.amazonaws.com.
That said, let me assume that you want to trigger the Lambda once a new record is put into the Kinesis stream; then all you need is the following.
Create an IAM role with the following assume role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
Create an IAM role policy (to be attached to the role created in step 1):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:{aws_region}:{account_id}:*:*",
      "Effect": "Allow"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListStreams"
      ],
      "Resource": "arn:aws:kinesis:{aws_region}:{account_id}:stream/{kinesis_stream_name}"
    }
  ]
}
Attach the IAM role from step 1 to the Lambda function.
Create the Lambda event source mapping ("Add Trigger") with the following attributes:
batch_size = 100
starting position = "LATEST"
Enabled = True
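In CDK terms (a sketch only; stream and fn are assumed to already exist in the stack, and this is just an alternative to creating the mapping by hand), the same mapping with those attributes looks like:

import * as kinesis from 'aws-cdk-lib/aws-kinesis';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { KinesisEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Assumed existing constructs.
declare const fn: lambda.Function;
declare const stream: kinesis.Stream;

// Creates the event source mapping and grants the function's role the
// Kinesis read permissions listed in the policy above.
fn.addEventSource(new KinesisEventSource(stream, {
  batchSize: 100,
  startingPosition: lambda.StartingPosition.LATEST,
  enabled: true,
}));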