How to configure authorization mechanism inline with boto3 - amazon-web-services

I am using boto3 in an AWS Lambda function to fetch an object from S3 in the Frankfurt region.
Signature Version 4 is necessary; otherwise, the following error is returned:
"errorMessage": "An error occurred (InvalidRequest) when calling
the GetObject operation: The authorization mechanism you have
provided is not supported. Please use AWS4-HMAC-SHA256."
I found ways to configure signature_version (http://boto3.readthedocs.org/en/latest/guide/configuration.html), but since I am using AWS Lambda, I do not have access to the underlying configuration profiles.
The code of my AWS Lambda function:
from __future__ import print_function
import boto3

def lambda_handler(event, context):
    input_file_bucket = event["Records"][0]["s3"]["bucket"]["name"]
    input_file_key = event["Records"][0]["s3"]["object"]["key"]
    input_file_name = input_file_bucket + "/" + input_file_key

    s3 = boto3.resource("s3")
    obj = s3.Object(bucket_name=input_file_bucket, key=input_file_key)
    response = obj.get()

    return event  # echo first key value
Is it possible to configure signature_version within this code, for example by using a Session? Or is there any other workaround for this?

Instead of using the default session, try using a custom session and Config from boto3.session:
import boto3
import boto3.session

session = boto3.session.Session(region_name='eu-central-1')
s3client = session.client('s3', config=boto3.session.Config(signature_version='s3v4'))
s3client.get_object(Bucket='<Bkt-Name>', Key='S3-Object-Key')
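The same configuration can be applied inside the Lambda handler from the question. A minimal sketch, assuming the bucket and key still come from the S3 event and the region is eu-central-1:
import boto3
import boto3.session

def lambda_handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    # Build a SigV4-capable client from a custom session instead of the default one
    session = boto3.session.Session(region_name='eu-central-1')
    s3client = session.client('s3', config=boto3.session.Config(signature_version='s3v4'))

    response = s3client.get_object(Bucket=bucket, Key=key)
    return event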

I tried the session approach, but I had issues. This method worked better for me; your mileage may vary:
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
You will need to import Config from botocore.client for this to work. See below for a working method to test a bucket (listing its objects). This assumes you are running it from an environment where your authentication is managed, such as Amazon EC2 or Lambda with an IAM role:
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

def test_bucket(bucket):
    print('testing bucket: ' + bucket)
    try:
        s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
        b = s3.Bucket(bucket)
        objects = b.objects.all()
        for obj in objects:
            print(obj.key)
        print('bucket test SUCCESS')
    except ClientError as e:
        print('Client Error')
        print(e)
        print('bucket test FAIL')
To test it, simply call the method with a bucket name, as shown below. Your role will have to grant the proper permissions.
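For example (the bucket name here is hypothetical):
test_bucket('my-example-bucket')  # expect "bucket test SUCCESS" if the role can list it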

Using a resource worked for me.
from botocore.client import Config
import boto3

def get_presigned_url(key, expTime):
    # AIRFLOW_BUCKET is defined elsewhere in the calling code
    s3 = boto3.resource("s3", config=Config(signature_version="s3v4"))
    return s3.meta.client.generate_presigned_url(
        "get_object", Params={"Bucket": AIRFLOW_BUCKET, "Key": key}, ExpiresIn=expTime
    )
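A usage sketch (the wrapper name, bucket, key, and expiry here are placeholders):
AIRFLOW_BUCKET = "my-airflow-bucket"  # hypothetical bucket name
url = get_presigned_url("dags/example.py", expTime=3600)
print(url)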

Related

AWS Lambda /tmp python script import module error

I am trying to run a Python script that is present in AWS Lambda's /tmp directory. The script requires some extra dependencies, such as boto3, to run. When AWS Lambda runs the file, it gives the following error:
ModuleNotFoundError: No module named 'boto3'
However, when I run this file directly as a Lambda function, it runs without any import errors.
The Lambda code that executes the script present in the /tmp directory:
import json
import os
import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    records = [x for x in event.get('Records', []) if x.get('eventName') == 'ObjectCreated:Put']
    sorted_events = sorted(records, key=lambda e: e.get('eventTime'))
    latest_event = sorted_events[-1] if sorted_events else {}
    info = latest_event.get('s3', {})
    file_key = info.get('object', {}).get('key')
    bucket_name = info.get('bucket', {}).get('name')

    s3 = boto3.resource('s3')
    BUCKET_NAME = bucket_name
    keys = [file_key]
    for KEY in keys:
        local_file_name = '/tmp/' + KEY
        s3.Bucket(BUCKET_NAME).download_file(KEY, local_file_name)

    print("Running Incoming File !! ")
    os.system('python ' + local_file_name)
The /tmp code that tries to get some data from S3 using boto3:
import sys
import boto3
import json

def main():
    session = boto3.Session(
        aws_access_key_id='##',
        aws_secret_access_key='##',
        region_name='##')
    s3 = session.resource('s3')
    # get a handle on the bucket that holds your file
    bucket = s3.Bucket('##')
    # get a handle on the object you want (i.e. your file)
    obj = bucket.Object(key='8.json')
    # get the object
    response = obj.get()
    # read the contents of the file
    lines = response['Body'].read().decode()
    data = json.loads(lines)
    transactions = data['dataset']['fields']
    print(str(len(transactions)))
    return str(len(transactions))

main()
So boto3 is imported in both scripts, but the import only succeeds when the Lambda code itself runs; the /tmp script can't import boto3.
What can be the reason, and how can I resolve it?
Executing another python process does not copy Lambda's PYTHONPATH by default:
os.system('python ' + local_file_name)
Rewrite like this:
os.system('PYTHONPATH=/var/runtime python ' + local_file_name)
To find out the complete PYTHONPATH the current Lambda runtime is using, add the following to the first script (the one executed by Lambda):
import sys
print(sys.path)
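An alternative sketch, if you prefer subprocess over os.system (the run_script helper here is hypothetical): copy Lambda's own sys.path into the child interpreter's PYTHONPATH so it can find boto3.
import os
import subprocess
import sys

def run_script(local_file_name):
    # Propagate Lambda's module search path to the child Python process
    env = dict(os.environ, PYTHONPATH=os.pathsep.join(sys.path))
    subprocess.run(['python', local_file_name], env=env, check=True)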

Lambda call S3 get public access block using boto3

I'm trying to verify through a Lambda function whether the public access block of my bucket mypublicbucketname is checked or not. For testing, I created a bucket and unchecked the public access block. So I wrote this Lambda:
import sys
from pip._internal import main

main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')

import json
import boto3
import botocore

def lambda_handler(event, context):
    # TODO implement
    print(boto3.__version__)
    print(botocore.__version__)
    client = boto3.client('s3')
    response = client.get_public_access_block(Bucket='mypublicbucketname')
    print("response:>>", response)
I updated boto3 and botocore to the latest versions:
1.16.40 #for boto3
1.19.40 #for botocore
Even though I uploaded them and the function seems correct, I got this exception:
[ERROR] ClientError: An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Can someone explain to me why I get this error?
For future users: if you get the same problem with get_public_access_block(), use this solution:
try:
    response = client.get_public_access_block(Bucket='mypublicbucketname')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No Public Access')
    else:
        print("unexpected error: %s" % (e.response))
For put_public_access_block, it works fine.
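The error itself just means the bucket has never had a public access block configuration set (or it was removed), so there is nothing to return. If you want the get call to succeed afterwards, one option is to create the configuration first; a minimal sketch, assuming you want all four settings enabled:
client.put_public_access_block(
    Bucket='mypublicbucketname',
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)
# Subsequent calls to get_public_access_block() will now return this configuration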

Upload files to s3 bucket using python gives Access Denied

I created a Python script that should upload a file from my EC2 instance to an S3 bucket:
import boto3
s3 = boto3.resource('s3')
data = open('backupFile.txt', 'rb')
s3.Bucket('mlsd').put_object(Key='backupFile.txt', Body=data)
I went to AWS account details and got the credentials.
I executed aws configure to set credentials on my EC2.
Here is the output of the credentials using aws configure list:
I went to .aws/credentials and pasted the access_key_id, secret_access_key, and token.
I ensured that the token is not expired.
When I ran the script, I got the following output:
Not sure what the problem is.
Boto3 looks for your credentials in a number of possible locations, as described here, so it should find your access_key_id and secret_access_key.
Make sure the user whose access_key_id you use has access to the S3 bucket.
I tried this code example and it works:
import logging
import boto3
from botocore.exceptions import ClientError

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        response = s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True
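A usage sketch with the bucket and file name from the question (your IAM user or role still needs s3:PutObject on that bucket):
# Upload backupFile.txt to the 'mlsd' bucket under the same key
if upload_file('backupFile.txt', 'mlsd'):
    print('upload succeeded')
else:
    print('upload failed - check IAM permissions and credentials')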

Generate Signed URL in S3 using boto3

In Boto, I used to generate a signed URL using the function below.
import boto
conn = boto.connect_s3()
bucket = conn.get_bucket(bucket_name, validate=True)
key = bucket.get_key(key)
signed_url = key.generate_url(expires_in=3600)
How do I do the exact same thing in boto3?
I searched through boto3 GitHub codebase but could not find a single reference to generate_url.
Has the function name changed?
From Generating Presigned URLs:
import boto3
import requests
from botocore import client

# Get the service client.
s3 = boto3.client('s3', config=client.Config(signature_version='s3v4'))

# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=3600  # one hour in seconds, increase if needed
)

# Use the URL to perform the GET operation. You can use any method you like
# to send the GET, but we will use requests here to keep things simple.
response = requests.get(url)
Function reference: generate_presigned_url()
I get the error InvalidRequest: The authorization mechanism you have provided is not supported when trying to access the generated URL in a normal browser – Aseem Apr 30 '19 at 5:22
As there isn't much info, I am assuming you are getting a signature version issue; if not, maybe it will help someone else! :P
For this you can import Config from botocore.client:
from botocore.client import Config
and then get the client using this config, providing the signature version as 's3v4':
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
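Putting both pieces together, a minimal sketch for a SigV4-only region such as Frankfurt (the bucket name, key, and region here are placeholders):
import boto3
from botocore.client import Config

# Explicit region plus SigV4 avoids the "authorization mechanism" error in regions
# that only accept AWS4-HMAC-SHA256 signatures
s3 = boto3.client('s3', region_name='eu-central-1',
                  config=Config(signature_version='s3v4'))

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'},
    ExpiresIn=3600,
)
print(url)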

Google Cloud API: Forbidden to access Enabled API using Service Account key

I am having issues using a Service Account P12 key and am getting HttpError 403.
I do not have this issue if I use web OAuth with a Client ID and Secret; however, I am building a service-to-service application.
Google Cloud API JSON is enabled.
import os
from httplib2 import Http
from pprintpp import pprint
from oauth2client.client import SignedJwtAssertionCredentials
from apiclient.discovery import build
from googleapiclient.errors import HttpError

SITE_ROOT = os.path.dirname(os.path.realpath(__file__))
P12_FILE = "REDACTED-0123456789.p12"
P12_PATH = os.path.join(SITE_ROOT, P12_FILE)
pprint(P12_PATH)

SCOPE = 'https://www.googleapis.com/auth/devstorage.read_only'
PROJECT_NAME = 'mobileapptracking-insights'
BUCKET_NAME = 'pubsite_prod_rev_0123456789'
CLIENT_EMAIL = 'REDACTED-service#foo-bar-0123456789.iam.gserviceaccount.com'

private_key = None
with open(P12_PATH, "rb") as p12_fp:
    private_key = p12_fp.read()

credentials = SignedJwtAssertionCredentials(
    CLIENT_EMAIL,
    private_key,
    SCOPE)

http_auth = credentials.authorize(Http())
storage = build('storage', version='v1', http=http_auth)

request = storage.objects().list(bucket=BUCKET_NAME)
try:
    response = request.execute()
except HttpError as error:
    print("HttpError: %s" % str(error))
    raise
except Exception as error:
    print("%s: %s" % (error.__class__.__name__, str(error)))
    raise

print(response)
Error message:
HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/pubsite_prod_rev_0123456789/o?alt=json returned "Forbidden">
What do I need to do to resolve this issue?
Your code looks fine (I just pasted it, changed the appropriate constants, and successfully ran it).
I would double-check:
That your client email is the correct one for the p12 key
That the bucket you're listing is accessible to that service account
Some other things you could do to help you figure out where the problem is:
Verify that you can list the public uspto-pair bucket
import httplib2 and set httplib2.debuglevel = 1 (as sketched below), and verify that the requests being made are the expected ones.
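For reference, a minimal sketch of that debug switch (set it before any API calls are made):
import httplib2

# Print the raw HTTP requests/responses made by the Google API client,
# so you can see exactly which bucket and scope are being requested
httplib2.debuglevel = 1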
The issue was that I had not assigned access permissions to the service's 'client email' through the Google Play Developers Console > Settings > USER ACCOUNTS & RIGHTS.