I have a Flask web app that is running on Google AppEngine. The app has a form that my user will use to supply image links. I want to download the image data from the link and then upload it to a Google Cloud Storage bucket.
What I have found so far on Google's documentation tells me to use the 'cloudstorage' client library which I have installed and imported as 'gcs'.
found here: https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage
I think I am not handling the image data correctly through requests. I get a 200 code back from the Cloud Storage upload call but there is no object when I look for it in the console. Here is where I try to retrieve the image and then upload it:
img_resp = requests.get(image_link, stream=True)
objectName = '/myBucket/testObject.jpg'
gcs_file = gcs.open(objectName,
                    'w',
                    content_type='image/jpeg')
gcs_file.write(img_resp)
gcs_file.close()
edit:
Here is my updated code to reflect an answer's suggestion:
image_url = urlopen(url)
content_type = image_url.headers['Content-Type']
img_bytes = image_url.read()
image_url.close()
filename = bucketName + objectName
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}
with gcs.open(filename,
              'w',
              content_type=content_type,
              options=options) as f:
    f.write(img_bytes)
    f.close()
However, I am still getting a 201 response on the POST (create file) call and then a 200 on the PUT call, but the object never appears in the console.
Try this:
from google.appengine.api import images
import urllib2
image = urllib2.urlopen(image_url)
img_resp = image.read()
image.close()
objectName = '/myBucket/testObject.jpg'
options = {'x-goog-acl': 'public-read',
           'Cache-Control': 'private, max-age=0, no-transform'}
with gcs.open(objectName,
              'w',
              content_type='image/jpeg',
              options=options) as f:
    f.write(img_resp)
    f.close()
And why restrict them to just entering a URL? Why not also allow them to upload a local image:
if isinstance(image_or_url, basestring):  # should be a URL
    if not image_or_url.startswith('http'):
        image_or_url = ''.join(['http://', image_or_url])
    image = urllib2.urlopen(image_or_url)
    content_type = image.headers['Content-Type']
    img_resp = image.read()
    image.close()
else:
    img_resp = image_or_url.read()
    content_type = image_or_url.content_type
If you are running on the development server, the file will be uploaded into your local datastore. Check it at:
http://localhost:<your admin port number>/datastore?kind=__GsFileInfo__
and
http://localhost:<your admin port number>/datastore?kind=__BlobInfo__
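Going back to the requests-based attempt in the question: the first snippet passes the requests Response object to gcs_file.write() rather than the downloaded bytes, which is one likely culprit. A minimal sketch of the same upload staying with requests (assuming the same cloudstorage client imported as gcs, and bucket/object names like those in the question):

import requests
import cloudstorage as gcs

# Names below mirror the question; adjust to your own bucket/object.
objectName = '/myBucket/testObject.jpg'

img_resp = requests.get(image_link)    # image_link comes from the form
img_resp.raise_for_status()            # fail loudly on a bad download
content_type = img_resp.headers.get('Content-Type', 'image/jpeg')

with gcs.open(objectName, 'w', content_type=content_type,
              options={'x-goog-acl': 'public-read'}) as f:
    f.write(img_resp.content)          # write the raw bytes, not the Response object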
I have recently completed this tutorial from AWS on how to create a thumbnail generator using Lambda and S3: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html. Basically, I'm uploading an image file to my '-source' bucket, and then Lambda generates a thumbnail and uploads it to my '-thumbnail' bucket.
Everything works as expected. However, I wanted to use the S3 object URL from the '-thumbnail' bucket so that I can load the image from there for a small app I'm building. The issue I'm having is that the URL doesn't display the image in the browser but instead downloads the file. This causes my app to error out.
I did some research and learned that I had to change the content type to image/jpeg and also make the object public using an ACL. This works for all of the other buckets I have except the one that holds the thumbnails. I have recreated this bucket several times. I even copied the settings from my existing buckets. I have compared settings to all the other buckets and they appear to be the same.
I wanted to reach out and see if anyone has run into this type of issue before, or if there is something I might be missing.
Here is the code I'm using to generate the thumbnail.
import boto3
from boto3.dynamodb.conditions import Key, Attr
import os
import sys
import uuid
import urllib.parse
from urllib.parse import unquote_plus
from PIL.Image import core as _imaging
import PIL.Image

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['DB_TABLE_NAME'])

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    recordId = key
    tmpkey = key.replace('/', '')
    download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
    upload_path = '/tmp/resized-{}'.format(tmpkey)
    try:
        s3.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        bucket = bucket.replace('source', 'thumbnail')
        s3.upload_file(upload_path, bucket, key)
        print(f"Thumbnail created and uploaded to {bucket} successfully.")
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
    else:
        s3.put_object_acl(ACL='public-read',
                          Bucket=bucket,
                          Key=key)
        # create image url to add to dynamo
        url = f"https://postreader-thumbnail.s3.us-west-2.amazonaws.com/{key}"
        print(url)
        # create record id to update the appropriate record in the 'Posts' table
        recordId = key.replace('.jpeg', '')
        # add the image_url column along with the image url as the value
        table.update_item(
            Key={'id': recordId},
            UpdateExpression="SET #statusAtt = :statusValue, #img_urlAtt = :img_urlValue",
            ExpressionAttributeValues={':statusValue': 'UPDATED', ':img_urlValue': url},
            ExpressionAttributeNames={'#statusAtt': 'status', '#img_urlAtt': 'img_url'},
        )

def resize_image(image_path, resized_path):
    with PIL.Image.open(image_path) as image:
        # change to standard/hard-coded size
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)
This could happen if the Content-Type of the file you're uploading is binary/octet-stream. You can modify your script as shown below to provide a custom content type while uploading.
s3.upload_file(upload_path, bucket, key, ExtraArgs={'ContentType': "image/jpeg"})
After more troubleshooting, the issue apparently came down to the bucket's name. I created a new bucket with a different name than before, and after doing so I was able to upload and share images without issue.
I edited my code so that the Lambda uploads to the new bucket name, and I am now able to share the image via URL without it being downloaded.
I have a Django 3.x project where I can upload multiple files and associated form data through the admin pages to a model called Document. However, I need to upload a large number of files, so I wrote a small Python script to automate that process.
I am having one problem with the script. I can't seem to set the name of the file as it is set when uploaded through the admin page.
Here is the script. I had a few problems getting the CSRF token working correctly, so there may be some redundant code for that.
import requests
# Set up the urls to login to the admin pages and access the correct add page
URL1='http://localhost:8000/admin/'
URL2='http://localhost:8000/admin/login/?next=/admin/'
URL3 = 'http://localhost:8090/admin/memorabilia/document/add/'
USER='admin'
PASSWORD='xxxxxxxxxxxxx'
client = requests.session()
# Retrieve the CSRF token first
client.get(URL1) # sets the cookie
csrftoken = client.cookies['csrftoken']
print("csrftoken1=%s" % csrftoken)
login_data = dict(username=USER, password=PASSWORD, csrfmiddlewaretoken=csrftoken)
r = client.post(URL2, data=login_data, headers={"Referer": "foo"})
r = client.get(URL3)
csrftoken = client.cookies['csrftoken']
print("csrftoken2=%s" % csrftoken)
cookies = dict(csrftoken= csrftoken)
headers = {'X-CSRFToken': csrftoken}
file_path = "/media/mark/ea00fd8e-4330-4d76-81d8-8fe7dde2cb95/2017/Memorable/20047/Still Images/Photos/20047_Phillips_Photo_052_002.jpg"
data = {
    "csrfmiddlewaretoken": csrftoken,
    "documentType_id": '1',
    "rotation": '0',
    "TBD": '350',
    "Title": "A test title",
    "Period": "353",
    "Source Folder": '258',
    "Decade": "168",
    "Location": "352",
    "Photo Type": "354",
}
file_data = None
with open(file_path, 'rb') as fr:
    file_data = fr.read()
# storage_file_name is the name of the FileField in the Document model.
#response_1 = requests.post(url=URL3, data=data, files={'storage_file_name': file_data,}, cookies=cookies)
response_2 = client.post(url=URL3, data=data, files={'storage_file_name': file_data, 'name': "20047_Phillips_Photo_052_002.jpg"}, cookies=cookies,)
When I upload using the admin page, the name of the file is "20047_Phillips_Photo_052_002.jpg", as it should be (i.e. storage_file_name.name = 20047_Phillips_Photo_052_002.jpg).
When I run the script using files={'storage_file_name': file_data,} (see response_1 at the bottom of the script), the file uploads correctly except that the name of the file is "storage_file_name" and not "20047_Phillips_Photo_052_002.jpg" (i.e. storage_file_name.name = "storage_file_name").
When I upload using files={'storage_file_name': file_data, 'name': "20047_Phillips_Photo_052_002.jpg"} the name of the file is still "storage_file_name" (i.e. storage_file_name.name = "storage_file_name").
I looked in the request.FILES object when uploading a file through the admin page, and the _name field for each object is the name of the file being uploaded. The documentation for the Django File object says it has a field called name.
What am I missing to get my script to upload a file the same way as the admin page does? By that I mean, the name of the file is not "storage_file_name".
When I change the last response= line to
response = client.post(url=URL3, data=metadata, files= {'storage_file_name': open(file_path ,'rb'),}, cookies=cookies, headers=headers)
the file upload works and the file name is correctly displayed.
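Another option, if you want to control the filename explicitly rather than relying on the name of the open file object, is requests' tuple form for files, where the first element is the filename. A sketch reusing the field, variables, and file path from the question's script:

with open(file_path, 'rb') as fh:
    response = client.post(
        url=URL3,
        data=data,
        # (filename, fileobj, content_type) lets requests send an explicit filename
        files={'storage_file_name': ('20047_Phillips_Photo_052_002.jpg', fh, 'image/jpeg')},
        cookies=cookies,
        headers=headers,
    )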
I am trying to upload a file that I retrieve from a Django form to Amazon S3. But every time I save it, it gets saved in first_part/second_part/third_part/amazon-sw/(required image) instead of in first_part/second_part/third_part.
I use the tinys3 library. I tried boto but found it a little complex to use, so I used tinys3 instead. Please do help me out.
access_key = aws_details.AWS_ACCESS_KEY_ID
secret_key = aws_details.AWS_SECRET_ACCESS_KEY
bucket_name = "s3-ap-southeast-1.amazonaws.com/first_part/second_part/third_part/"
myfile = request.FILES['image'] # getting the image from html view
fs = FileSystemStorage()
fs.save('demo_blah_blah.png', myfile) # saving the image
conn = tinys3.Connection(access_key, secret_key, tls=True, endpoint='s3-ap-southeast-1.amazonaws.com') # connecting to the bucket
f = open('demo_blah_blah.png', 'rb')
conn.upload('test_pic10000.png', f, bucket_name) # uploading to boto using tinys3 library
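One thing worth noting is that the bucket name passed here includes the endpoint and a path; S3 bucket names cannot contain slashes, so the folder segments usually belong in the object key instead. A minimal sketch with boto3 rather than tinys3, under the assumption that first_part is the actual bucket (a hypothetical name) and second_part/third_part/ is meant to be a key prefix:

import boto3

s3 = boto3.client('s3',
                  aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key,
                  region_name='ap-southeast-1')

# Hypothetical layout: 'first_part' is the bucket, the folders go in the key.
with open('demo_blah_blah.png', 'rb') as f:
    s3.upload_fileobj(f, 'first_part', 'second_part/third_part/test_pic10000.png')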
I have this view, which takes a user_id and an image_id. When the user clicks the link, it checks whether there is an image. If there is, I would like the file to be force-downloaded automatically.
template:
<a class="downloadBtn" :href="website + '/download-image/'+ user_id+'/'+ image_id +'/'">Download</a>
Before, I was developing on my local machine, and this code was working.
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        content_type = mimetypes.guess_type(ui.image.url)
        wrapper = FileWrapper(open(str(ui.image.file)))
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Disposition'] = 'attachment; filename="image.jpeg"'
        return response
    except UserImage.DoesNotExist:
        ...
But now I am using aws s3 for my static and media files. I am using django-storages and boto3. How can I force download the image in the browser?
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        url = ui.image.url
        ...
        # FORCE DOWNLOAD THE IMAGE
        ...
    except UserImage.DoesNotExist:
        ...
        # ERROR, NO IMAGE AVAILABLE
        ...
You can just return an HttpResponse with the image itself.
return HttpResponse(instance.image, content_type="image/jpeg")
This returns the image's byte stream. The Content-Type header lets platforms like Postman display the image.
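If the goal is specifically to force a download rather than inline display, a Content-Disposition: attachment header needs to be set on the response. A minimal sketch inside the download_image view from the question, assuming the file is readable through the django-storages backend and using an illustrative filename:

from django.http import FileResponse

ui = UserImage.objects.get(user=user_id, image=image_id)
# as_attachment=True makes Django add Content-Disposition: attachment,
# which is what triggers the browser's download behaviour.
return FileResponse(ui.image.open('rb'), as_attachment=True, filename='image.jpeg')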
I'm developing an application using Django and AngularJS.
One of the major things the worker server (written in Python with Flask) does is download videos from S3 (which are uploaded by users) and upload those videos to YouTube.
Is there a way to delete a YouTube video in Python?
There is no such code example written in Python.
Does anyone know how to do this simply, like the code example below?
This is the sample code for uploading a video. I referred to this code and implemented the upload feature.
def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=YOUTUBE_UPLOAD_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))

def initialize_upload(youtube, options):
    tags = None
    if options.keywords:
        tags = options.keywords.split(",")
    body = dict(
        snippet=dict(
            title=options.title,
            description=options.description,
            tags=tags,
            categoryId=options.category
        ),
        status=dict(
            privacyStatus=options.privacyStatus
        )
    )
    # Call the API's videos.insert method to create and upload the video.
    insert_request = youtube.videos().insert(
        part=",".join(body.keys()),
        body=body,
        media_body=MediaFileUpload(options.file, chunksize=-1, resumable=True)
    )
    resumable_upload(insert_request)
Make a file called: delete_video.py
Usage: python delete_video.py --id=MY_VID_ID
#!/usr/bin/python
import httplib
import httplib2
import os
import random
import sys
import time
from apiclient.discovery import build
from apiclient.errors import HttpError
from apiclient.http import MediaFileUpload
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow
# Explicitly tell the underlying HTTP transport library not to retry, since
# we are handling retry logic ourselves.
httplib2.RETRIES = 1
# Maximum number of times to retry before giving up.
MAX_RETRIES = 10
# Always retry when these exceptions are raised.
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, httplib.NotConnected,
                        httplib.IncompleteRead, httplib.ImproperConnectionState,
                        httplib.CannotSendRequest, httplib.CannotSendHeader,
                        httplib.ResponseNotReady, httplib.BadStatusLine)
# Always retry when an apiclient.errors.HttpError with one of these status
# codes is raised.
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]
# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the Google Developers Console at
# https://console.developers.google.com/.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"
# This OAuth 2.0 access scope allows full read/write access to the
# authenticated user's YouTube account, which is required to delete videos.
YOUTUBE_DELETE_SCOPE = "https://www.googleapis.com/auth/youtube"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the Developers Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__),
CLIENT_SECRETS_FILE))
VALID_PRIVACY_STATUSES = ("public", "private", "unlisted")
def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=YOUTUBE_DELETE_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))
if __name__ == '__main__':
    argparser.add_argument("--id", required=True, help="Video youtube ID")
    args = argparser.parse_args()
    if not args.id:
        exit("Please specify a youtube ID using the --id= parameter.")
    youtube = get_authenticated_service(args)
    try:
        resp = youtube.videos().delete(id=args.id, onBehalfOfContentOwner=None).execute()
    except HttpError, e:
        print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
Assuming that you are using the Python client library, I found this in the documentation:
delete(id=*, onBehalfOfContentOwner=None)
Deletes a YouTube video.

Args:
  id: string, The id parameter specifies the YouTube video ID for the resource that is being deleted. In a video resource, the id property specifies the video's ID. (required)
  onBehalfOfContentOwner: string, Note: This parameter is intended exclusively for YouTube content partners.
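Putting the two answers together, the delete call itself is a one-liner once you have an authenticated service object. A sketch, where youtube comes from get_authenticated_service above and the video ID is a placeholder:

# Deletes the video with the given ID from the authenticated channel.
response = youtube.videos().delete(id="MY_VID_ID").execute()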