I am trying to upload a file that I retrieve from a Django form to Amazon S3. But every time I save, it gets saved in first_part/second_part/third_part/amazon-sw/(required image) instead of in first_part/second_part/third_part.
I am using the tinys3 library. I tried boto but found it a little complex to use, so I went with tinys3. Please do help me out.
import tinys3
from django.core.files.storage import FileSystemStorage

access_key = aws_details.AWS_ACCESS_KEY_ID
secret_key = aws_details.AWS_SECRET_ACCESS_KEY
bucket_name = "s3-ap-southeast-1.amazonaws.com/first_part/second_part/third_part/"

myfile = request.FILES['image']  # getting the image from the HTML form
fs = FileSystemStorage()
fs.save('demo_blah_blah.png', myfile)  # saving the image locally

conn = tinys3.Connection(access_key, secret_key, tls=True, endpoint='s3-ap-southeast-1.amazonaws.com')  # connecting to the endpoint
f = open('demo_blah_blah.png', 'rb')
conn.upload('test_pic10000.png', f, bucket_name)  # uploading with the tinys3 library
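My guess at the fix (not verified against your bucket layout): with tinys3, the third argument to upload() should be the bare bucket name, and any folder prefix belongs in the key, not in the bucket string. A minimal sketch, assuming the bucket is actually named first_part and the folders belong in the key:

import tinys3

conn = tinys3.Connection(access_key, secret_key, tls=True,
                         endpoint='s3-ap-southeast-1.amazonaws.com')

with open('demo_blah_blah.png', 'rb') as f:
    # Bucket name only; the folder path goes into the key.
    # 'first_part' as the bucket name is an assumption.
    conn.upload('second_part/third_part/test_pic10000.png', f, 'first_part')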
I was trying to create some products in an ecommerce project in Django. I had the data file ready and just wanted to loop through the data and save it to the database with Product.objects.create(image='', ...), but I couldn't upload the images from a local directory to the database.
I tried these approaches:
1
with open('IMAGE_PATH', 'rb') as f:
    image = f.read()
Product.objects.create(image=image)
2
image = open('IMAGE_PATH', 'rb')
Product.objects.create(image=image)
3
module_dir = os.path.dirname(os.path.realpath(__file__))
for p in products:
    file_path = os.path.join(module_dir, p['image'])
    product = Product.objects.create()
    product.image.save(
        file_path,
        File(open(file_path, 'rb'))
    )
    product.save()
None of them worked for me.
After some searching, I found the answer.
The code to use would be like this:
from django.core.files import File

for p in products:
    product = Product.objects.create()
    FILE_PATH = p['image']
    local_file = open(f'./APP_NAME/{FILE_PATH}', "rb")
    djangofile = File(local_file)
    product.image.save('FILE_NAME.jpg', djangofile)
    local_file.close()
from django.core.files import File
import os
import urllib  # Python 2; in Python 3 use urllib.request.urlretrieve

result = urllib.urlretrieve(image_url)  # image_url is a URL to an image
model_instance.photo.save(
    os.path.basename(image_url),
    File(open(result[0], 'rb'))
)
model_instance.save()
Got the answer from here
I have recently completed this tutorial from AWS on how to create a thumbnail generator using Lambda and S3: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html. Basically, I upload an image file to my '-source' bucket, and then Lambda generates a thumbnail and uploads it to my '-thumbnail' bucket.
Everything works as expected. However, I wanted to use the S3 object URL in the '-thumbnail' bucket so that I can load the image from there for a small app I'm building. The issue I'm having is that the URL doesn't display the image in the browser but instead downloads the file. This causes my app to error out.
I did some research and learned that I had to change the content type to image/jpeg and also make the object public using an ACL. This works for all of the other buckets I have except the one that holds the thumbnails. I have recreated this bucket several times; I even copied the settings from my existing buckets and compared them to all the other buckets, and they appear to be the same.
I wanted to reach out and see if anyone has run into this type of issue before, or if there is something I might be missing.
Here is the code I'm using to generate the thumbnail.
import boto3
from boto3.dynamodb.conditions import Key, Attr
import os
import sys
import uuid
import urllib.parse
from urllib.parse import unquote_plus
from PIL.Image import core as _imaging
import PIL.Image

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['DB_TABLE_NAME'])

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    recordId = key
    tmpkey = key.replace('/', '')
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
    upload_path = '/tmp/resized-{}'.format(tmpkey)
    try:
        s3.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        bucket = bucket.replace('source', 'thumbnail')
        s3.upload_file(upload_path, bucket, key)
        print(f"Thumbnail created and uploaded to {bucket} successfully.")
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
    else:
        s3.put_object_acl(ACL='public-read',
                          Bucket=bucket,
                          Key=key)
        # create image url to add to dynamo
        url = f"https://postreader-thumbnail.s3.us-west-2.amazonaws.com/{key}"
        print(url)
        # create record id to update the appropriate record in the 'Posts' table
        recordId = key.replace('.jpeg', '')
        # add the image_url column along with the image url as the value
        table.update_item(
            Key={'id': recordId},
            UpdateExpression="SET #statusAtt = :statusValue, #img_urlAtt = :img_urlValue",
            ExpressionAttributeValues={':statusValue': 'UPDATED', ':img_urlValue': url},
            ExpressionAttributeNames={'#statusAtt': 'status', '#img_urlAtt': 'img_url'},
        )

def resize_image(image_path, resized_path):
    with PIL.Image.open(image_path) as image:
        # change to standard/hard-coded size
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)
This can happen if the Content-Type of the file you're uploading is binary/octet-stream. You can modify your script as below to provide a custom content type while uploading:
s3.upload_file(upload_path, bucket, key, ExtraArgs={'ContentType': "image/jpeg"})
After more troubleshooting, the issue was apparently related to the bucket's name. I created a new bucket with a different name than it had previously. After doing so, I was able to upload and share images without issue.
I edited my code so that the Lambda uploads to the new bucket name, and I am able to share the image via URL without it being downloaded.
I have a Flask web app that is running on Google AppEngine. The app has a form that my user will use to supply image links. I want to download the image data from the link and then upload it to a Google Cloud Storage bucket.
What I have found so far in Google's documentation tells me to use the 'cloudstorage' client library, which I have installed and imported as 'gcs'.
Found here: https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage
I think I am not handling the image data correctly with requests. I get a 200 code back from the Cloud Storage upload call, but there is no object when I look for it in the console. Here is where I try to retrieve the image and then upload it:
img_resp = requests.get(image_link, stream=True)
objectName = '/myBucket/testObject.jpg'
gcs_file = gcs.open(objectName,
                    'w',
                    content_type='image/jpeg')
gcs_file.write(img_resp)
gcs_file.close()
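(A note on this snippet: requests.get returns a Response object, so gcs_file.write(img_resp) stores the object's string form rather than the image bytes. If you stay with requests instead of switching to urllib, the raw bytes are on resp.content. A minimal sketch under that assumption:)

import requests

resp = requests.get(image_link)
objectName = '/myBucket/testObject.jpg'

# resp.content holds the raw bytes of the downloaded image
with gcs.open(objectName, 'w', content_type='image/jpeg') as gcs_file:
    gcs_file.write(resp.content)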
Edit:
Here is my updated code to reflect an answer's suggestion:
image_url = urlopen(url)
content_type = image_url.headers['Content-Type']
img_bytes = image_url.read()
image_url.close()

filename = bucketName + objectName
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}

with gcs.open(filename,
              'w',
              content_type=content_type,
              options=options) as f:
    f.write(img_bytes)
    f.close()
However, I am still getting a 201 response on the POST (create file) call and then a 200 on the PUT call but the object never appears in the console.
Try this:
from google.appengine.api import images
import urllib2

image = urllib2.urlopen(image_url)
img_resp = image.read()
image.close()

objectName = '/myBucket/testObject.jpg'
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}

with gcs.open(objectName,
              'w',
              content_type='image/jpeg',
              options=options) as f:
    f.write(img_resp)
    f.close()
And why restrict them to just entering a URL? Why not also allow them to upload a local image:
if isinstance(image_or_url, basestring):  # should be a url
    if not image_or_url.startswith('http'):
        image_or_url = ''.join(['http://', image_or_url])
    image = urllib2.urlopen(image_or_url)
    content_type = image.headers['Content-Type']
    img_resp = image.read()
    image.close()
else:
    img_resp = image_or_url.read()
    content_type = image_or_url.content_type
If you are running on the development server, the file will be uploaded into your local datastore. Check it at:
http://localhost:<your admin port number>/datastore?kind=__GsFileInfo__
and
http://localhost:<your admin port number>/datastore?kind=__BlobInfo__
I am trying to get an image from my S3 bucket and return it for use in my API gateway.
Permissions are set correctly.
import boto3

s3 = boto3.resource('s3')

def handler(event, context):
    image = s3.meta.client.download_file('mybucket', 'email-sig/1.png', '/tmp/1.png')
    return image
However, I am getting a null return and cannot seem to figure out how to get the image. Is this the correct approach, and why is it not returning my image?
You are downloading the image file, which ends up in /tmp/1.png. What you are returning is the return value of download_file(), which is None. What data type does your API gateway expect?
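If the goal is to return the image itself through API Gateway, one common pattern is a Lambda proxy response carrying the bytes base64-encoded with isBase64Encoded set (this assumes a proxy integration with binary media types enabled on the API). A minimal sketch reusing the bucket and key from the question:

import base64
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # download_file() writes to disk and returns None; read the bytes instead
    obj = s3.get_object(Bucket='mybucket', Key='email-sig/1.png')
    data = obj['Body'].read()
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'image/png'},
        'body': base64.b64encode(data).decode('utf-8'),
        'isBase64Encoded': True,  # requires binary media types enabled on the API
    }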
I had images in an S3 bucket and had to return them.
First, get the image and encode it to base64, then return that base64 string.
On the consumer side, that base64 string can be decoded back into the image; that is what my API returns.
At the end, the code returns base64; to check it, go to the browser, search for 'base64 to image',
and paste the returned base64 string to see your S3 bucket image.
The following code should help someone:
import boto3
import base64

def lambda_handler(event, context):
    user_download_img = 'Name Of Your Image in S3'
    print('user_download_img ==> ', user_download_img)
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(u'Your-Bucket-Name')
    obj = bucket.Object(key=user_download_img)  # pass your image name as the key
    response = obj.get()  # fetch the object
    img = response[u'Body'].read()  # read the response body; you can also print it
    print(type(img))  # just checking the type (bytes)
    myObj = [base64.b64encode(img)]  # encode the image to base64
    print(type(myObj))  # printing the values
    print(myObj[0])  # the base64 form of the image
    print('type(myObj[0]) ================>', type(myObj[0]))
    return_json = str(myObj[0])  # assign to return_json to build the return value
    print('return_json ========================>', return_json)
    return_json = return_json.replace("b'", "")  # stripping the b'...' wrapper is a must to get the actual image data
    encoded_image = return_json.replace("'", "")
    return {
        'status': 'True',
        'statusCode': 200,
        'message': 'Downloaded profile image',
        'encoded_image': encoded_image  # returning base64 of your image from the S3 bucket
    }
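As an aside, the str()/replace() dance at the end can be avoided: bytes objects have a decode() method, so the plain base64 string can be produced directly. A small sketch:

import base64

def to_base64_string(img_bytes):
    # b64encode returns bytes; decode('utf-8') yields the plain base64
    # string without the b'...' wrapper that str() would add.
    return base64.b64encode(img_bytes).decode('utf-8')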
Now go to API Gateway and create your API.
I'm using Flask / Heroku and the Boto library. I want the uploaded file to be saved in my S3...
@app.route("/step3/", methods=["GET", "POST"])
def step3():
    if request.method == "GET":
        return render_template("step3.html")
    else:
        file = request.files['resume']
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            k = Key(S3_BUCKET)
            k.key = "TEST"
            k.set_contents_from_filename(file)
            return redirect(url_for("preview"))
but this gives me the following...
TypeError: coercing to Unicode: need string or buffer, FileStorage found
To write it, you need to pass the file's contents as a string, which means you need to read the file after it has been opened.
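A minimal sketch of that fix, keeping the same Flask view (it assumes S3_BUCKET is an already-open boto bucket, as in the question):

file = request.files['resume']
if file and allowed_file(file.filename):
    filename = secure_filename(file.filename)
    k = Key(S3_BUCKET)
    k.key = filename
    # FileStorage is a file-like object; read() returns its raw contents,
    # which set_contents_from_string() can upload directly.
    k.set_contents_from_string(file.read())

Alternatively, since FileStorage is file-like, k.set_contents_from_file(file) works as well.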