Reading CSV file from S3 using Lambda Function-GetObject operation: Access Denied - amazon-web-services

I am now trying to read a CSV file (only the column names) from an S3 bucket using a Lambda function. I have created an S3 trigger within Lambda. Here is the sample code:
import json
import boto3
import csv

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    csv_file = event['Records'][0]['s3']['object']['key']
    response = s3_client.get_object(Bucket=bucket, Key=csv_file)
    # splitlines() keeps fields that contain spaces intact (split() would break them up)
    lines = response['Body'].read().decode('utf-8').splitlines()
    # DictReader consumes the header row; fieldnames holds the column names
    results = list(csv.DictReader(lines).fieldnames or [])
    print(results)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Whenever I try to upload a new file, I get this error:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 10, in lambda_handler
    response = s3_client.get_object(Bucket=bucket, Key=csv_file)
  File "/var/runtime/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name
I added a specific role and provided necessary permissions to my S3 bucket.
Snippet of my S3 bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
I have provided necessary permissions to my s3 bucket. But still getting this error:
Response
{
    "errorMessage": "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied",
    "errorType": "ClientError",
    "stackTrace": [
        "  File \"/var/task/lambda_function.py\", line 19, in lambda_handler\n    response = s3_client.get_object(Bucket=bucket, Key=csv_file)\n",
        "  File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n    return self._make_api_call(operation_name, kwargs)\n",
        "  File \"/var/runtime/botocore/client.py\", line 705, in _make_api_call\n    raise error_class(parsed_response, operation_name)\n"
    ]
}
Can anyone tell me why I am getting this error?

Assuming that your S3 bucket is the one in charge of invoking the lambda function, this will require two parties to have permissions.
1). The bucket needs to have a policy that allows it to trigger the function.
2). The lambda that will pull the CSV files from the bucket needs a policy too, man. In order to achieve the second part, you might want to consider the pre-built policy templates available in SAM templates; this will not only make your policy definition more readable but also limit the actions that your lambda can perform on your buckets. The first sample below showcases how to grant S3 CRUD permissions:
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3CrudPolicy:
          BucketName: "s3-containing-your-csv"
This example below showcases read-only implementation
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3ReadPolicy:
          BucketName: "s3-containing-your-csv"
This example below showcases write-only implementation
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3WritePolicy:
          BucketName: "s3-containing-your-csv"

Make sure the role can perform the PutObject and GetObject IAM actions on the bucket in the IAM resource specified in the IAM policy. Wrap your logic in a "try/except" block as well, as good practice, to catch your errors; you might be surprised that the error from S3 that is propagating occurs earlier than expected. Furthermore, in the Lambda console select this function then click on the "Monitor" tab to be redirected to the CloudWatch Logs console, where you can read more details about the error, especially if you are returning exceptions early.
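As an added example (not the asker's code or either answer's), the handler with the try/except block the second answer recommends, decoding the URL-encoded event key and reading only the header row; MemoryS3, SAMPLE_EVENT and SAMPLE_OBJECTS are constructed stand-ins so it runs without AWS access.

```python
import csv
import io
from urllib.parse import unquote_plus


def read_header(s3_client, event):
    """Return the column names of the CSV object named in an S3 event record."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # S3 URL-encodes object keys in event notifications: "my file.csv" arrives as
    # "my+file.csv", and fetching the encoded key fails with a misleading error.
    key = unquote_plus(record["object"]["key"])
    try:
        body = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()
    except Exception as err:  # botocore.exceptions.ClientError on a real client
        code = getattr(err, "response", {}).get("Error", {}).get("Code", "Unknown")
        # S3 answers AccessDenied rather than NoSuchKey for a missing key when the
        # role lacks s3:ListBucket, so log the exact bucket/key, not only the code.
        print(f"get_object failed for s3://{bucket}/{key}: {code}")
        return {"bucket": bucket, "key": key, "error": code, "columns": []}
    # utf-8-sig strips the BOM that spreadsheet exports often prepend to the header
    reader = csv.reader(io.StringIO(body.decode("utf-8-sig")))
    columns = next(reader, [])
    return {"bucket": bucket, "key": key, "error": None, "columns": columns}


class MemoryS3:
    """Constructed stand-in for boto3's S3 client so the example runs offline."""

    def __init__(self, objects):
        self.objects = objects  # {(bucket, key): bytes}

    def get_object(self, Bucket, Key):
        if (Bucket, Key) not in self.objects:
            err = Exception("AccessDenied")
            err.response = {"Error": {"Code": "AccessDenied"}}  # mimics ClientError
            raise err
        return {"Body": io.BytesIO(self.objects[(Bucket, Key)])}


# Constructed sample event in the shape S3 sends to Lambda (key URL-encoded)
SAMPLE_EVENT = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "uploads/my+file.csv"}}}]}
SAMPLE_OBJECTS = {("my-bucket", "uploads/my file.csv"): b"\xef\xbb\xbfid,name,amount\n1,a,2.0\n"}


def lambda_handler(event, context):
    import boto3  # imported lazily so the offline demo below does not need it
    return read_header(boto3.client("s3"), event)


if __name__ == "__main__":
    print(read_header(MemoryS3(SAMPLE_OBJECTS), SAMPLE_EVENT))
```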

Related

S3 file not downloaded when triggering a Lambda function associated with EFS

I'm using the Serverless framework to create a Lambda function that, when triggered by an S3 upload (uploading test.vcf to s3://trigger-test/uploads/), downloads that uploaded file from S3 to EFS (specifically to the /mnt/efs/vcfs/ folder). I'm pretty new to EFS and followed AWS documentation for setting up the EFS access point, but when I deploy this application and upload a test file to trigger the Lambda function, it fails to download the file and gives this error in the CloudWatch logs:
[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/mnt/efs/vcfs/test.vcf.A0bA45dC'
Traceback (most recent call last):
  File "/var/task/handler.py", line 21, in download_files_to_efs
    result = s3.download_file('trigger-test', key, efs_loci)
  File "/var/runtime/boto3/s3/inject.py", line 170, in download_file
    return transfer.download_file(
  File "/var/runtime/boto3/s3/transfer.py", line 307, in download_file
    future.result()
  File "/var/runtime/s3transfer/futures.py", line 106, in result
    return self._coordinator.result()
  File "/var/runtime/s3transfer/futures.py", line 265, in result
    raise self._exception
  File "/var/runtime/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
  File "/var/runtime/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/var/runtime/s3transfer/download.py", line 571, in _main
    fileobj.seek(offset)
  File "/var/runtime/s3transfer/utils.py", line 367, in seek
    self._open_if_needed()
  File "/var/runtime/s3transfer/utils.py", line 350, in _open_if_needed
    self._fileobj = self._open_function(self._filename, self._mode)
  File "/var/runtime/s3transfer/utils.py", line 261, in open
    return open(filename, mode)
My hunch is that this has to do with the local mount path specified in the Lambda function versus the Root directory path in the Details portion of the EFS access point configuration. Ultimately, I want the test.vcf file I upload to S3 to be downloaded to the EFS folder: /mnt/efs/vcfs/.
Relevant files:
serverless.yml:
service: LambdaEFS-trigger-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2
  vpc:
    securityGroupIds:
      - sg-XXXXXXXX
      - sg-XXXXXXXX
      - sg-XXXXXXXX
    subnetIds:
      - subnet-XXXXXXXXXX

functions:
  cfnPipelineTrigger:
    handler: handler.download_files_to_efs
    description: Lambda to download S3 file to EFS folder.
    events:
      - s3:
          bucket: trigger-test
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .vcf
          existing: true
    fileSystemConfig:
      localMountPath: /mnt/efs
      arn: arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:access-point/fsap-XXXXXXX
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:ListBucket
        Resource:
          - arn:aws:s3:::trigger-test
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:GetObjectVersion
        Resource:
          - arn:aws:s3:::trigger-test/uploads/*
      - Effect: Allow
        Action:
          - elasticfilesystem:ClientMount
          - elasticfilesystem:ClientWrite
          - elasticfilesystem:ClientRootAccess
        Resource:
          - arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:file-system/fs-XXXXXX

plugins:
  - serverless-iam-roles-per-function

package:
  individually: true
  exclude:
    - '**/*'
  include:
    - handler.py
handler.py:
import json
import boto3

s3 = boto3.client('s3', region_name='us-west-2')

def download_files_to_efs(event, context):
    """
    Locates the S3 file name (i.e. S3 object "key" value) that initiated the Lambda call, then downloads the file
    into the locally attached EFS drive at the target location.
    :param: event | S3 event record
    :return: dict
    """
    print(event)
    key = event.get('Records')[0].get('s3').get('object').get('key')  # bucket: trigger-test, key: uploads/test.vcf
    efs_loci = f"/mnt/efs/vcfs/{key.split('/')[-1]}"  # '/mnt/efs/vcfs/test.vcf'
    print("key: %s, efs_loci: %s" % (key, efs_loci))
    # download_file returns None and raises on failure, so there is no return value to test
    s3.download_file('trigger-test', key, efs_loci)
    print('Download Success...')
    return {'status_code': 200}
EFS Access Point details:
Details
Root directory path: /vcfs
POSIX
USER ID: 1000
Group ID: 1000
Root directory creation permissions
Owner User ID: 1000
Owner Group ID: 1000
POSIX permissions to apply to the root directory path: 777
Your local path is localMountPath: /mnt/efs. So in your code you should be using only this path (not /mnt/efs/vcfs):
efs_loci = f"/mnt/efs/{key.split('/')[-1]}"  # '/mnt/efs/test.vcf'

ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I am trying to call a lambda function which will push some messages into the S3 bucket. But every time I call the lambda function I get the error below:
ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Below is my lambda code
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    #data = json.loads(event["Records"][0]["body"])
    data = event["Records"][0]["body"]
    s3.put_object(Bucket="sqsmybucket", Key="data.json", Body=json.dumps(data))
    #print(event)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
I am using a user account which also has the role to access the S3
I have checked the s3 bucket permission and all public access are open for it
But I am repeatedly getting the error message below in the CloudWatch log:
2020-06-05T23:48:20.920+05:30
[ERROR] ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 9, in lambda_handler
    s3.put_object(Bucket="sqsmybucket",Key="data.json", Body=json.dumps(data))
  File "/var/runtime/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 626, in _make_api_call
    raise error_class(parsed_response, operation_name)
Please help, I am really clueless about the situation. Thanks in advance.
Please make sure the role attached to the lambda function has the s3:PutObject permission.
For example, the least privilege/permission needed is
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}
Notice the /* at the end of the resource string. The reason why /* is needed is because according to the doc, the PutObject action has an object resource type. Here is the definition of the object resource type. Basically, * is matching all possible S3 object keys, and the stuff to the left of / is limiting its scope down to a single S3 bucket.
I had the same issue, apparently, you don't have to just use the ARN of the bucket, but also include the "/*" at the end of it.
It's important to always use the Least Privileged pattern when granting permissions.
In case I'm not the only one in this:
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      Statement:
        - Effect: Allow
          Action:
            - s3:PutObject
          Resource: !GetAtt MyBucket.Arn
^ won't work. The last line needs to be changed to:
          Resource: !Sub "${MyBucket.Arn}/*"
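To make the /* point concrete, an added example (not from any answer in this thread) of a deliberately simplified IAM evaluator: an explicit Deny beats any Allow, and a request matched by no statement is implicitly denied. It handles only Effect/Action/Resource with * and ? wildcards (no Principal, Condition or NotAction), so it models an execution-role policy rather than a bucket policy; the policy documents and ARNs are constructed from this thread.

```python
import fnmatch


def _as_list(value):
    """IAM allows Action and Resource to be a string or a list; normalise to a list."""
    return value if isinstance(value, list) else [value]


def _matches(patterns, value, ignore_case=False):
    # IAM wildcards: '*' matches any run of characters, '?' exactly one.
    # Action names are case-insensitive; resource ARNs are compared as written.
    patterns = _as_list(patterns)
    if ignore_case:
        value, patterns = value.lower(), [p.lower() for p in patterns]
    return any(fnmatch.fnmatchcase(value, p) for p in patterns)


def is_allowed(policies, action, resource):
    """Simplified evaluation of identity policies attached to the execution role:
    an explicit Deny wins over any Allow, and a request matched by no statement
    is implicitly denied, which is the AccessDenied seen throughout this thread."""
    allowed = False
    for doc in policies:
        for stmt in _as_list(doc["Statement"]):
            if not _matches(stmt["Action"], action, ignore_case=True):
                continue
            if not _matches(stmt["Resource"], resource):
                continue
            if stmt["Effect"] == "Deny":
                return False  # explicit deny, regardless of any Allow elsewhere
            allowed = True
    return allowed


# Constructed policies modelled on this thread. Written against the bucket ARN
# only, PutObject on an object never matches, because objects have their own ARN.
BUCKET_ONLY = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::sqsmybucket"}]}
# The corrected form with the trailing /* covers every object key in the bucket.
OBJECTS_IN_BUCKET = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::sqsmybucket/*"}]}
# An explicit Deny on the same objects overrides the Allow above.
DENY_PUT = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Deny", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::sqsmybucket/*"}]}
OBJECT_ARN = "arn:aws:s3:::sqsmybucket/data.json"


def check_thread_cases():
    """Return the three outcomes discussed above as a dict (used by the demo below)."""
    return {
        "bucket_arn_only": is_allowed([BUCKET_ONLY], "s3:PutObject", OBJECT_ARN),
        "bucket_objects": is_allowed([OBJECTS_IN_BUCKET], "s3:PutObject", OBJECT_ARN),
        "allow_plus_deny": is_allowed([OBJECTS_IN_BUCKET, DENY_PUT], "s3:PutObject", OBJECT_ARN),
    }


if __name__ == "__main__":
    for name, outcome in check_thread_cases().items():
        print(f"{name}: {'allowed' if outcome else 'denied'}")
```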

What IAM role should be assigned to aws lambda function so that it can get the emr cluster status

I've prepared a simple lambda function in AWS to terminate long running EMR clusters after a certain threshold is reached. This code snippet is tested locally and is working perfectly fine. Now I pushed it into a lambda, took care of the library dependencies, so that's also fine. This lambda is triggered from a CloudWatch rule, which is a simple cron schedule. I'm using an existing IAM role which has these 7 policies attached to it.
SecretsManagerReadWrite
AmazonSQSFullAccess
AmazonS3FullAccess
CloudWatchFullAccess
AWSGlueServiceRole
AmazonSESFullAccess
AWSLambdaRole
I've configured the lambda to be inside the same vpc and security group as that of the emr(s). Still I'm getting this error consistently:
An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:sts::xyz:assumed-role/dev-lambda-role/terminate_inactive_dev_emr_clusters is not authorized to perform: elasticmapreduce:ListClusters on resource: *: ClientError
Traceback (most recent call last):
  File "/var/task/terminate_dev_emr.py", line 24, in terminator
    ClusterStates=['STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING']
  File "/var/runtime/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:sts::xyz:assumed-role/dev-lambda-role/terminate_inactive_dev_emr_clusters is not authorized to perform: elasticmapreduce:ListClusters on resource: *
My lambda function looks something like this:
import pytz
import boto3
from datetime import datetime, timedelta

def terminator(event, context):
    ''' cluster lifetime limit in hours '''
    LIMIT = 7
    TIMEZONE = 'Asia/Kolkata'
    AWS_REGION = 'eu-west-1'
    print('Start cluster check')
    emr = boto3.client('emr', region_name=AWS_REGION)
    local_tz = pytz.timezone(TIMEZONE)
    today = local_tz.localize(datetime.today(), is_dst=None)
    lifetimelimit = today - timedelta(hours=LIMIT)
    clusters = emr.list_clusters(
        CreatedBefore=lifetimelimit,
        ClusterStates=['STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING']
    )
    if clusters['Clusters'] is not None:
        for cluster in clusters['Clusters']:
            description = emr.describe_cluster(ClusterId=cluster['Id'])
            if (len(description['Cluster']['Tags']) == 1
                    and description['Cluster']['Tags'][0]['Key'] == 'dev.ephemeral'):
                print('Terminating Cluster: [{id}] with name [{name}]. It was active since: [{time}]'.format(
                    id=cluster['Id'], name=cluster['Name'],
                    time=cluster['Status']['Timeline']['CreationDateTime'].strftime('%Y-%m-%d %H:%M:%S')))
                emr.terminate_job_flows(JobFlowIds=[cluster['Id']])
    print('cluster check done')
    return
Any help is appreciated.
As the error message indicates, the lambda does not have permission to call ListClusters on EMR. As you are working with EMR clusters and would also like to terminate the clusters, you should give the lambda function a proper IAM role which has the capability to do that. Create a new IAM policy from the AWS console (say EMRFullAccess). Here is what it looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "elasticmapreduce:*",
            "Resource": "*"
        }
    ]
}
After creating policy, create a new role from AWS console with lambda as service and attach newly created policy above. After that, attach this role to your lambda function. That should solve issue :-)

Not authorized to perform: dynamodb:Scan Lambda

I need to scan a dynamodb database but I keep getting this error:
"errorMessage": "An error occurred (AccessDeniedException) when calling the Scan operation: User: arn:aws:sts::747857903140:assumed-role/test_role/TestFunction is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot"
This is my Lambda code (index.py):
import json
import boto3

client = boto3.resource('dynamodb')
table = client.Table('HelpBot')

def handler(event, context):
    table.scan()
    return {
        "statusCode": 200,
        "body": json.dumps('Hello from Lambda!')
    }
This is my SAM template (template.yml):
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  MyFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: python3.6
      Policies:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - dynamodb:Scan
            Resource: arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot
Does your lambda role have the DynamoDB policies applied?
Go to IAM, then Policies. Choose the DynamoDB policy (try full access and then go back and restrict your permissions). From Policy Actions, select Attach and attach it to the role that is used by your Lambda.
Try configuring the boto client to use your IAM role directly in the lambda function.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html
import json
import boto3

client = boto3.resource(
    'dynamodb',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
table = client.Table('HelpBot')

def handler(event, context):
    table.scan()
    return {
        "statusCode": 200,
        "body": json.dumps('Hello from Lambda!')
    }
Make sure you are not applying the default (autogenerated) role to your lambda; you need to create a role with lambda basic execution permissions and attach it when you are creating your lambda, or in the configuration pane choose Permissions on the left and change it.

use serverless to get instance's status

I am new to the serverless framework and I want to get an instance's status, so I used boto3 describe_instance_status(), but I keep getting an error that I am not authorized to perform this kind of operation although I have administrator access to all AWS services. Please help: do I need to change or add something to be recognized?
here is my code :
import json
import boto3
import logging
import sys
from botocore.exceptions import ClientError

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def instance_status(event, context):
    """Take an instance Id and return its status"""
    body = {}
    status_code = 200
    client = boto3.client('ec2')
    response = client.describe_instance_status(InstanceIds=['i-070ad071'])
    return response
and here is my serverless.yml file
service: ec2
provider:
  name: aws
  runtime: python2.7
  timeout: 30
  memorySize: 128
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ec2:DescribeInstanceStatus"
      Resource: "*"
functions:
  instance_status:
    handler: handler.instance_status
    description: Status ec2 instances
    events:
      - http:
          path: ''
          method: get
and here is the error message i am getting:
"errorType": "ClientError", "errorMessage": "An error occurred (UnauthorizedOperation) when calling the DescribeInstanceStatus operation: You are not authorized to perform this operation."
...i have administrator access to all aws services...
Take note that the Lambda function is NOT running under your user account. You're supposed to define its role and permissions in your YAML.
In the provider section in your serverless.yaml, add the following:
iamRoleStatements:
  - Effect: Allow
    Action:
      - ec2:DescribeInstanceStatus
    Resource: <insert your resource here>
Reference: https://serverless.com/framework/docs/providers/aws/guide/iam/
You are not authorized to perform this operation
This means you have no permission to perform this action: client.describe_instance_status.
There are some ways to give your function the right permission:
Use an IAM Role: Create an IAM Role with permissions according to your requirement. Then assign this IAM role to the lambda function in the settings page. Your lambda will then automatically get rotated keys to perform actions.
Create an AccessKey/SecretKey with permissions according to your requirement. Set them in the yaml file, and in your lambda function configure boto3 to acquire these access/secret keys, then perform the action.
Read more from this http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html