Lambda get image from s3

I am trying to get an image from my S3 bucket and return it for use in my API gateway.
Permissions are set correctly.
import boto3

s3 = boto3.resource('s3')

def handler(event, context):
    image = s3.meta.client.download_file('mybucket', 'email-sig/1.png', '/tmp/1.png')
    return image
However, I am getting a null return and cannot seem to figure out how to get the image. Is this the correct approach, and why is it not returning my image?

You are downloading the image file, which ends up at /tmp/1.png. What you are returning is the return value of download_file(), which is None, so the null response is expected. What data type does your API gateway expect?
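If the goal is to return the image through API Gateway, one option (a sketch, assuming a Lambda proxy integration with binary media types configured; the bucket and key are taken from the question) is to skip the /tmp download, read the object's bytes with get_object, and return them base64-encoded with isBase64Encoded set:

import base64
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # fetch the object directly instead of downloading it to /tmp
    obj = s3.get_object(Bucket='mybucket', Key='email-sig/1.png')
    body = obj['Body'].read()
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'image/png'},
        'body': base64.b64encode(body).decode('utf-8'),
        'isBase64Encoded': True  # tells API Gateway to decode before sending to the client
    }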

I have images in an S3 bucket and have to return them. First I get the image and encode it to base64, then return that base64 string. On the client side, I just decode the base64 to get the image back from the API.
At the end, the code returns base64; to verify it, go to the browser, search for 'base64 to image', paste the returned base64 string, and you will see your S3 bucket image.
The following code should help someone.
import base64
import boto3

def lambda_handler(event, context):
    user_download_img = 'Name Of Your Image in S3'
    print('user_download_img ==> ', user_download_img)
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(u'Your-Bucket-Name')
    obj = bucket.Object(key=user_download_img)  # pass your image name as the key
    response = obj.get()  # get the object
    img = response[u'Body'].read()  # read the response body (raw bytes)
    encoded_image = base64.b64encode(img).decode('utf-8')  # encode the bytes to a base64 string
    print('encoded_image ==> ', encoded_image)
    return {
        'status': 'True',
        'statusCode': 200,
        'message': 'Downloaded profile image',
        'encoded_image': encoded_image  # base64 of the image in your S3 bucket
    }
Now go to API gateway and create your API.


Share JPEG file stored on S3 via URL instead of downloading

I have recently completed this tutorial from AWS on how to create a thumbnail generator using lambda and S3: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html . Basically, I'm uploading an image file to my '-source' bucket and then lambda generates a thumbnail and uploads it to my '-thumbnail' bucket.
Everything works as expected. However, I wanted to use the S3 object URL in the '-thumbnail' bucket so that I can load the image from there for a small app I'm building. The issue I'm having is that the URL doesn't display the image in the browser but instead downloads the file. This causes my app to error out.
I did some research and learned that I had to change the content-type to image/jpeg and also make the object public using an ACL. This works for all of the other buckets I have except the one that has the thumbnails. I have recreated this bucket several times. I even copied the settings from my existing buckets. I have compared settings to all the other buckets and they appear to be the same.
I wanted to reach out and see if anyone has run into this type of issue before, or if there is something I might be missing.
Here is the code I'm using to generate the thumbnail.
import boto3
from boto3.dynamodb.conditions import Key, Attr
import os
import sys
import uuid
import urllib.parse
from urllib.parse import unquote_plus
from PIL.Image import core as _imaging
import PIL.Image

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['DB_TABLE_NAME'])

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    recordId = key
    tmpkey = key.replace('/', '')
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
    upload_path = '/tmp/resized-{}'.format(tmpkey)
    try:
        s3.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        bucket = bucket.replace('source', 'thumbnail')
        s3.upload_file(upload_path, bucket, key)
        print(f"Thumbnail created and uploaded to {bucket} successfully.")
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
    else:
        s3.put_object_acl(ACL='public-read', Bucket=bucket, Key=key)
        # create image url to add to dynamo
        url = f"https://postreader-thumbnail.s3.us-west-2.amazonaws.com/{key}"
        print(url)
        # create record id to update the appropriate record in the 'Posts' table
        recordId = key.replace('.jpeg', '')
        # add the image_url column along with the image url as the value
        table.update_item(
            Key={'id': recordId},
            UpdateExpression="SET #statusAtt = :statusValue, #img_urlAtt = :img_urlValue",
            ExpressionAttributeValues={':statusValue': 'UPDATED', ':img_urlValue': url},
            ExpressionAttributeNames={'#statusAtt': 'status', '#img_urlAtt': 'img_url'},
        )

def resize_image(image_path, resized_path):
    with PIL.Image.open(image_path) as image:
        # change to standard/hard-coded size
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)
This could happen if the Content-Type of the file you're uploading is binary/octet-stream. You can modify your script as below to provide a custom content type while uploading:
s3.upload_file(upload_path, bucket, key, ExtraArgs={'ContentType': "image/jpeg"})
After more troubleshooting, the issue was apparently related to the bucket's name. I created a new bucket with a different name than it had previously. After doing so, I was able to upload and share images without issue.
I edited my code so that the lambda uploads to the new bucket name, and I am able to share the image via URL without it downloading.
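As an alternative that avoids public ACLs entirely, a presigned URL can override the response headers so the browser renders the image inline (a sketch; the key name is a placeholder):

import boto3

s3 = boto3.client('s3')
# generate a time-limited URL that serves the object inline as a JPEG
url = s3.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': 'postreader-thumbnail',  # bucket name from the question
        'Key': 'example.jpeg',             # hypothetical key
        'ResponseContentType': 'image/jpeg',
        'ResponseContentDisposition': 'inline',
    },
    ExpiresIn=3600,  # seconds
)
print(url)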

AWS Lambda: read image from S3 upload event

I am using Lambda to read image files when they are uploaded to S3 through an S3 trigger. The following is my code:
import json
import numpy as np
import face_recognition as fr

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(bucket, key)
This correctly prints the bucket name and key. However, how do I read the image so that I can run the face_recognition module on it? Can I generate the ARN for each uploaded image and use it to read the image?
You can read the image from S3 directly:
import boto3

s3 = boto3.client('s3')
resp = s3.get_object(Bucket=bucket, Key=key)
image_bytes = resp['Body'].read()
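From there, the bytes can be wrapped in a file-like object and passed to face_recognition, which accepts file objects as well as file paths (a sketch; face_locations is just one example of what you might run):

import io
import boto3
import face_recognition as fr

s3 = boto3.client('s3')
resp = s3.get_object(Bucket=bucket, Key=key)  # bucket/key from the S3 event record
# wrap the raw bytes so face_recognition can load them like a file
image = fr.load_image_file(io.BytesIO(resp['Body'].read()))
print(fr.face_locations(image))  # e.g. detect faces in the uploaded image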

Uploading an image to a boto bucket

I am trying to upload an image that I am retrieving from a Django form to Amazon S3 via boto. But every time I save, it gets saved in first_part/second_part/third_part/amazon-sw/(required image) instead of getting saved in first_part/second_part/third_part.
I use the tinys3 library. I tried boto but found it a little complex to use, so I used tinys3. Please do help me out.
import tinys3
from django.core.files.storage import FileSystemStorage

access_key = aws_details.AWS_ACCESS_KEY_ID
secret_key = aws_details.AWS_SECRET_ACCESS_KEY
bucket_name = "s3-ap-southeast-1.amazonaws.com/first_part/second_part/third_part/"
myfile = request.FILES['image']  # getting the image from the html view
fs = FileSystemStorage()
fs.save('demo_blah_blah.png', myfile)  # saving the image locally
conn = tinys3.Connection(access_key, secret_key, tls=True, endpoint='s3-ap-southeast-1.amazonaws.com')  # connecting to the bucket
f = open('demo_blah_blah.png', 'rb')
conn.upload('test_pic10000.png', f, bucket_name)  # uploading using the tinys3 library
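The bucket_name above is likely the problem: in S3 the bucket name is just the bucket itself, and any folder-like prefix belongs in the object key, not in the bucket name or endpoint. A sketch of the upload with that split (the bucket name 'my-bucket' is a placeholder):

import tinys3

conn = tinys3.Connection(access_key, secret_key, tls=True,
                         endpoint='s3-ap-southeast-1.amazonaws.com')
with open('demo_blah_blah.png', 'rb') as f:
    # the folder prefix goes in the key; the last argument is only the bucket name
    conn.upload('first_part/second_part/third_part/test_pic10000.png', f, 'my-bucket')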

How to force download an image on click with django and aws s3

I have this view, which takes a user_id and an image_id. When the user clicks the link, it checks whether there is an image. If there is, I would like the file to force download automatically.
template:
<a class="downloadBtn" :href="website + '/download-image/'+ user_id+'/'+ image_id +'/'">Download</a>
Before, when I was developing on my local machine, this code was working:
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        content_type = mimetypes.guess_type(ui.image.url)
        wrapper = FileWrapper(open(str(ui.image.file)))
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Disposition'] = 'attachment; filename="image.jpeg"'
        return response
    except UserImage.DoesNotExist:
        ...
But now I am using aws s3 for my static and media files. I am using django-storages and boto3. How can I force download the image in the browser?
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        url = ui.image.url
        ...
        ... FORCE DOWNLOAD THE IMAGE
        ...
    except UserImage.DoesNotExist:
        ...
        ... ERROR, NO IMAGE AVAILABLE
        ...
You can just return an HttpResponse with the image itself.
return HttpResponse(instance.image, content_type="image/jpeg")
This returns the image's byte stream. The Content-Type header is what lets platforms like Postman display the image.
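To actually force a download rather than inline display, the standard HTTP mechanism is a Content-Disposition: attachment header. A sketch for the view above (the filename is a placeholder):

from django.http import HttpResponse

def download_image(request, user_id=None, image_id=None):
    ui = UserImage.objects.get(user=user_id, image=image_id)
    response = HttpResponse(ui.image, content_type="image/jpeg")
    # 'attachment' tells the browser to save the file instead of rendering it
    response['Content-Disposition'] = 'attachment; filename="image.jpeg"'
    return response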

Download image data then upload to Google Cloud Storage

I have a Flask web app that is running on Google AppEngine. The app has a form that my user will use to supply image links. I want to download the image data from the link and then upload it to a Google Cloud Storage bucket.
What I have found so far on Google's documentation tells me to use the 'cloudstorage' client library which I have installed and imported as 'gcs'.
found here: https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage
I think I am not handling the image data correctly through requests. I get a 200 code back from the Cloud Storage upload call but there is no object when I look for it in the console. Here is where I try to retrieve the image and then upload it:
img_resp = requests.get(image_link, stream=True)
objectName = '/myBucket/testObject.jpg'
gcs_file = gcs.open(objectName, 'w', content_type='image/jpeg')
gcs_file.write(img_resp)
gcs_file.close()
edit:
Here is my updated code to reflect an answer's suggestion:
image_url = urlopen(url)
content_type = image_url.headers['Content-Type']
img_bytes = image_url.read()
image_url.close()
filename = bucketName + objectName
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}
with gcs.open(filename, 'w', content_type=content_type, options=options) as f:
    f.write(img_bytes)
    f.close()
However, I am still getting a 201 response on the POST (create file) call and then a 200 on the PUT call, but the object never appears in the console.
Try this:
from google.appengine.api import images
import urllib2

image = urllib2.urlopen(image_url)
img_resp = image.read()
image.close()
objectName = '/myBucket/testObject.jpg'
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}
with gcs.open(objectName, 'w', content_type='image/jpeg', options=options) as f:
    f.write(img_resp)
    f.close()
And why restrict them to just entering a URL? Why not allow them to upload a local image:
if isinstance(image_or_url, basestring):  # should be a url
    if not image_or_url.startswith('http'):
        image_or_url = ''.join(['http://', image_or_url])
    image = urllib2.urlopen(image_or_url)
    content_type = image.headers['Content-Type']
    img_resp = image.read()
    image.close()
else:
    img_resp = image_or_url.read()
    content_type = image_or_url.content_type
If you are running on the development server, the file will be uploaded into your local datastore. Check it at:
http://localhost:<your admin port number>/datastore?kind=__GsFileInfo__
and
http://localhost:<your admin port number>/datastore?kind=__BlobInfo__