Generate Signed URL in S3 using boto3 - amazon-web-services

In Boto, I used to generate a signed URL using the code below.
import boto
conn = boto.connect_s3()
bucket = conn.get_bucket(bucket_name, validate=True)
key = bucket.get_key(key)
signed_url = key.generate_url(expires_in=3600)
How do I do the exact same thing in boto3?
I searched through the boto3 GitHub codebase but could not find a single reference to generate_url.
Has the function name changed?

From Generating Presigned URLs:
import boto3
import requests
from botocore import client
# Get the service client.
s3 = boto3.client('s3', config=client.Config(signature_version='s3v4'))
# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=3600  # one hour in seconds, increase if needed
)
# Use the URL to perform the GET operation. You can use any method you like
# to send the GET, but we will use requests here to keep things simple.
response = requests.get(url)
Function reference: generate_presigned_url()

I get the error "InvalidRequest: The authorization mechanism you have provided is not supported" when trying to access the generated URL in a normal browser – Aseem Apr 30 '19 at 5:22
As there isn't much info, I am assuming you are running into a signature version issue; if not, maybe it will help someone else! :P
For this you can import Config from botocore:
from botocore.client import Config
and then create the client using this config, providing the signature version as 's3v4':
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

Related

ConvertAPI in AWS Lambda?

I need to deploy convertapi in an AWS Lambda function.
If I try import convertapi in Python, it doesn't work because the library isn't available in the Lambda environment.
In AWS Lambda, libraries are deployed either from a local folder bundled with the function or via a layer ARN.
Is there an available ARN for convertapi like in https://github.com/keithrozario/Klayers/blob/master/deployments/python3.7/arns/eu-west-3.csv ?
If not, which folder should I copy/paste into my Lambda to be able to do import convertapi?
This is an example in Python without using the ConvertAPI library.
`requests` library is required to run this example.
It can be installed using
> pip install requests
or if you are using Python 3:
> pip3 install requests
import requests
import os.path
import sys
file_path = './test.docx'
secret = 'Your secret can be found at https://www.convertapi.com/a'
if not os.path.isfile(file_path):
    sys.exit('File not found: ' + file_path)
url = 'https://v2.convertapi.com/convert/docx/to/pdf?secret=' + secret
files = {'file': open(file_path, 'rb')}
headers = {'Accept': 'application/octet-stream'}
response = requests.post(url, files=files, headers=headers)
if response.status_code != 200:
    sys.exit(response.text)
output_file = open('result.pdf', 'wb')
output_file.write(response.content)
output_file.close()
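To run this inside a Lambda function, a minimal handler might look like the sketch below. It is only a sketch under a few assumptions: the requests library is bundled with the deployment package (or provided via a layer), the secret comes from a hypothetical CONVERTAPI_SECRET environment variable, and the caller sends the DOCX content base64-encoded in an event field named docx.
import base64
import os
import requests
# Hypothetical: secret supplied via a CONVERTAPI_SECRET environment variable.
CONVERT_URL = 'https://v2.convertapi.com/convert/docx/to/pdf?secret=' + os.environ['CONVERTAPI_SECRET']
def lambda_handler(event, context):
    # Hypothetical: the caller passes the DOCX payload base64-encoded under 'docx'.
    docx_bytes = base64.b64decode(event['docx'])
    files = {'file': ('input.docx', docx_bytes)}
    headers = {'Accept': 'application/octet-stream'}
    response = requests.post(CONVERT_URL, files=files, headers=headers)
    if response.status_code != 200:
        raise RuntimeError(response.text)
    # Return the resulting PDF base64-encoded so the response stays JSON-serializable.
    return {'pdf': base64.b64encode(response.content).decode('ascii')}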

How to serve image from gcs using python 2.7 standard app engine?

The following code is an almost verbatim copy of the sample code from Google for serving a file from Google Cloud Storage via the Python 2.7 App Engine Standard Environment. I am serving it locally with the command:
dev_appserver.py --default_gcs_bucket_name darianhickman-201423.appspot.com
import cloudstorage as gcs
import webapp2
class LogoPage(webapp2.RequestHandler):
    def get(self):
        bucket_name = "darianhickman-201423.appspot.com"
        self.response.headers['Content-Type'] = 'image/jpeg'
        self.response.headers['Message'] = "LogoPage"
        gcs_file = gcs.open("/" + bucket_name + '/logo.jpg')
        contents = gcs_file.read()
        gcs_file.close()
        self.response.write(contents)
app = webapp2.WSGIApplication([('/logo.jpg', LogoPage),
                               ('/logo2.jpg', LogoPage)],
                              debug=True)
The empty body message I see on the console is:
NotFoundError: Expect status [200] from Google Storage. But got status 404.
Path: '/darianhickman-201423.appspot.com/logo.jpg'.
Request headers: None.
Response headers: {'date': 'Sun, 30 Dec 2018 18:54:54 GMT', 'connection': 'close', 'server': 'Development/2.0'}.
Body: ''.
Extra info: None.
Again this is almost identical to read logic documented at
https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/read-write-to-cloud-storage
If you serve it locally using dev_appserver.py, it runs a local emulation of Cloud Storage and does not connect to the actual Google Cloud Storage.
Try writing a file and then reading it. You’ll see that it will succeed.
Here is a sample:
import os
import cloudstorage as gcs
from google.appengine.api import app_identity
import webapp2
class MainPage(webapp2.RequestHandler):
    def get(self):
        bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name())
        self.response.headers['Content-Type'] = 'text/plain'
        filename = "/" + bucket_name + "/testfile"
        # Create file
        gcs_file = gcs.open(filename,
                            'w',
                            content_type='text/plain')
        gcs_file.write('Hello world\n')
        gcs_file.close()
        # Read file and display content
        gcs_file = gcs.open(filename)
        contents = gcs_file.read()
        gcs_file.close()
        self.response.write(contents)
app = webapp2.WSGIApplication(
    [('/', MainPage)], debug=True)
Run it with dev_appserver.py --default_gcs_bucket_name a-local-bucket.
If you deploy your application on Google App Engine then it will work (assuming you have a file called logo.jpg uploaded) because it connects to Google Cloud Storage. I tested it with minor changes:
import os
import cloudstorage as gcs
from google.appengine.api import app_identity
import webapp2
class LogoPage(webapp2.RequestHandler):
    def get(self):
        bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name())
        # or you can use bucket_name = "<your-bucket-name>"
        self.response.headers['Content-Type'] = 'image/jpeg'
        self.response.headers['Message'] = "LogoPage"
        gcs_file = gcs.open("/" + bucket_name + '/logo.jpg')
        contents = gcs_file.read()
        gcs_file.close()
        self.response.write(contents)
app = webapp2.WSGIApplication(
    [('/', LogoPage)], debug=True)
Also, it's worth mentioning that the documentation for Using the client library with the development app server seems to be outdated; it states that:
There is no local emulation of Cloud Storage, all requests to read and
write files must be sent over the Internet to an actual Cloud Storage
bucket.
The team responsible for the documentation has already been informed about this issue.

How to call an external API webservice from Python in Amazon Lambda?

I am pretty new to AWS Lambda. I have Python code which makes a POST call to an external API:
import requests
import json
url = "http://sample/project"
headers = {
    'content-type': "application/json"
}
r = requests.post(url, headers=headers)
I tried putting it into an AWS Lambda function like below, but it didn't work:
import requests
import json
url = "http://sample/project"
headers = {
    'content-type': "application/json"
}
def lambda_handler(event, context):
    response = requests.request("POST", url, headers=headers)
    return response
But I am not getting any response, whereas if I run it from my local machine I get the output. Please help me: how can I make a POST call from AWS Lambda?
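One likely issue here, beyond packaging the requests library with the deployment (it is not in the default Lambda Python runtime), is that the handler returns the raw requests Response object, which Lambda cannot serialize to JSON. A minimal sketch, keeping the placeholder URL from the question and returning only serializable values:
import requests
url = "http://sample/project"
headers = {'content-type': "application/json"}
def lambda_handler(event, context):
    # Make the POST call, then return only JSON-serializable pieces of the response.
    r = requests.post(url, headers=headers)
    return {'statusCode': r.status_code, 'body': r.text}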

Google Cloud API: Forbidden to access Enabled API using Service Account key

I am having issues using a Service Account P12 key: I am getting HttpError 403.
I do not have this issue if I use web OAuth with a Client ID and Secret; however, I am building a service-to-service application.
The Google Cloud JSON API is enabled.
import os
from httplib2 import Http
from pprintpp import pprint
from oauth2client.client import SignedJwtAssertionCredentials
from apiclient.discovery import build
from googleapiclient.errors import HttpError
SITE_ROOT = os.path.dirname(os.path.realpath(__file__))
P12_FILE = "REDACTED-0123456789.p12"
P12_PATH = os.path.join(SITE_ROOT, P12_FILE)
pprint(P12_PATH)
SCOPE = 'https://www.googleapis.com/auth/devstorage.read_only'
PROJECT_NAME = 'mobileapptracking-insights'
BUCKET_NAME = 'pubsite_prod_rev_0123456789'
CLIENT_EMAIL = 'REDACTED-service#foo-bar-0123456789.iam.gserviceaccount.com'
private_key = None
with open(P12_PATH, "rb") as p12_fp:
    private_key = p12_fp.read()
credentials = SignedJwtAssertionCredentials(
    CLIENT_EMAIL,
    private_key,
    SCOPE)
http_auth = credentials.authorize(Http())
storage = build('storage', version='v1', http=http_auth)
request = storage.objects().list(bucket=BUCKET_NAME)
try:
    response = request.execute()
except HttpError as error:
    print("HttpError: %s" % str(error))
    raise
except Exception as error:
    print("%s: %s" % (error.__class__.__name__, str(error)))
    raise
print(response)
Error message:
HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/pubsite_prod_rev_0123456789/o?alt=json returned "Forbidden">
What do I need to do to resolve this issue?
Your code looks fine (I just pasted it, changed the appropriate constants, and successfully ran it).
I would double-check:
That your client email is the correct one for the p12 key
That the bucket you're listing is accessible to that service account
Some other things you could do to help you figure out where the problem is:
Verify that you can list the public uspto-pair bucket
import httplib2 and set httplib2.debuglevel = 1, and verify that the requests being made are the expected ones (a rough sketch of both checks follows below).
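For instance, the two checks might look roughly like this. It is only a sketch: it reuses the http_auth object from the question's code, and uspto-pair is the public bucket mentioned above.
import httplib2
from apiclient.discovery import build
# Print each HTTP request/response so you can see exactly what is being sent.
httplib2.debuglevel = 1
# http_auth is the authorized Http object built in the question's snippet.
storage = build('storage', version='v1', http=http_auth)
# Listing the public uspto-pair bucket should work even if your own bucket is misconfigured.
response = storage.objects().list(bucket='uspto-pair').execute()
print(response.get('items', []))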
The issue was that I had not assigned access permissions to the service's 'client email' through the Google Play Developers Console > Settings > USER ACCOUNTS & RIGHTS

How to configure authorization mechanism inline with boto3

I am using boto3 in AWS Lambda to fetch an object in S3 located in the Frankfurt region.
Signature Version 4 is necessary, otherwise the following error is returned:
"errorMessage": "An error occurred (InvalidRequest) when calling
the GetObject operation: The authorization mechanism you have
provided is not supported. Please use AWS4-HMAC-SHA256."
I realized there are ways to configure signature_version: http://boto3.readthedocs.org/en/latest/guide/configuration.html
But since I am using AWS Lambda, I do not have access to the underlying configuration profiles.
The code of my AWS Lambda function:
from __future__ import print_function
import boto3
def lambda_handler (event, context):
    input_file_bucket = event["Records"][0]["s3"]["bucket"]["name"]
    input_file_key = event["Records"][0]["s3"]["object"]["key"]
    input_file_name = input_file_bucket + "/" + input_file_key
    s3 = boto3.resource("s3")
    obj = s3.Object(bucket_name=input_file_bucket, key=input_file_key)
    response = obj.get()
    return event  # echo first key values
Is it possible to configure signature_version within this code, for example by using a Session? Or is there any other workaround for this?
Instead of using the default session, try using a custom session and Config from boto3.session:
import boto3
import boto3.session
session = boto3.session.Session(region_name='eu-central-1')
s3client = session.client('s3', config=boto3.session.Config(signature_version='s3v4'))
s3client.get_object(Bucket='<Bkt-Name>', Key='S3-Object-Key')
I tried the session approach, but I had issues. This method worked better for me; your mileage may vary:
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
You will need to import Config from botocore.client in order to make this work. See below for a functional method to test a bucket (list objects). This assumes you are running it from an environment where your authentication is managed, such as Amazon EC2 or Lambda with an IAM role:
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError
def test_bucket(bucket):
    print 'testing bucket: ' + bucket
    try:
        s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
        b = s3.Bucket(bucket)
        objects = b.objects.all()
        for obj in objects:
            print obj.key
        print 'bucket test SUCCESS'
    except ClientError as e:
        print 'Client Error'
        print e
        print 'bucket test FAIL'
To test it, simply call the method with a bucket name. Your role will have to grant proper permissions.
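For example (the bucket name below is just a placeholder; substitute one your role is allowed to list):
test_bucket('my-example-bucket')  # prints each object key, then SUCCESS or FAIL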
Using a resource worked for me.
from botocore.client import Config
import boto3
s3 = boto3.resource("s3", config=Config(signature_version="s3v4"))
url = s3.meta.client.generate_presigned_url(
    "get_object", Params={"Bucket": AIRFLOW_BUCKET, "Key": key}, ExpiresIn=expTime
)