I'm developing a web application in Django, and one of its features is adding new articles with a photo.
My Article model class contains models.FileField. I use S3BotoStorage as DEFAULT_FILE_STORAGE (Amazon S3).
Users can add a photo to an article in two ways:
1) Upload a photo from disk (by using input type=file)
2) Paste URL to existing photo online
If the user uses option 1, everything works: I get the uploaded file in the view from the request.FILES dictionary and assign it to the FileField of the Article object. The photo is uploaded to S3.
But when the user pastes a URL to a photo, the first thing I have to do in the view is download it. I use this function:
def downloadPhotoFromURL(url):
    try:
        img = urllib.urlretrieve(url)[0]
        return img
    except Exception:
        return None
Then I save this image to the FileField in the model, so the whole code responsible for downloading the image and uploading it to S3 looks like this:
articleImg = downloadPhotoFromURL(url)
f = File(open(articleImg, 'rb'))
newArticle.image.save('tmp', f)
In this situation, I cannot upload it to S3, and after 2 minutes I receive BotoServerError: 400 Bad Request. Unfortunately I don't have any other information about why this request is bad. Any idea what can be going wrong? By the time I save the image to the model, the model has already been saved, so it exists when I try to upload the photo to S3.
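For reference, a minimal sketch of an alternative that skips the on-disk temp file entirely by wrapping the downloaded bytes in a ContentFile, which Django then pushes through DEFAULT_FILE_STORAGE (the names save_image_from_url and the article argument are illustrative, and this assumes Python 3's urllib rather than the urllib.urlretrieve used above):

```python
import os
from urllib.parse import urlparse

def filename_from_url(url, default='photo.jpg'):
    """Derive a file name from the URL path, falling back to a default."""
    name = os.path.basename(urlparse(url).path)
    return name or default

def save_image_from_url(article, url):
    """Download the image and attach it to article.image in one step.

    `article` and its `image` FileField are assumptions based on the
    question; ContentFile avoids writing a temp file to disk at all.
    """
    from urllib.request import urlopen              # deferred: stdlib, py3
    from django.core.files.base import ContentFile  # deferred: Django
    data = urlopen(url).read()  # raw bytes of the photo
    # .save() streams the bytes through DEFAULT_FILE_STORAGE (S3 here)
    article.image.save(filename_from_url(url), ContentFile(data), save=True)
```

Because the bytes never touch the local filesystem, there is no file handle to open in the wrong mode.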
I'm using ImageKit to resize pictures on a website I'm developing with Django. I've used ImageKit before without any issues, but I have a strange problem.
I am using S3 for media/static files. When I upload an image in the admin and refresh the template serving the images, I get a "ValueError: I/O operation on closed file" error page. If I then refresh the page, it loads fine without a problem.
I am using an ImageSpec field to resize images and using {{ image.thumbnail.url }} in my template.
I am very much a beginner with Django, so I'm looking for some direction. Why does it work on a page reload and not when initially uploaded?
Here are the relevant fields in my model.
class Media(models.Model):
    image = models.ImageField(upload_to="media")
    thumbnail = ImageSpecField([Adjust(sharpness=1.1), ResizeToFill(800, 650)],
                               source='image', format='JPEG',
                               options={'quality': 100})
I am assuming it's a problem with S3: it's trying to access the resized image, but the ImageKit thumbnail isn't created on file upload, only when the page is loaded?
OK, after some searching, here is the fix (everything now working) for the issue with django-storages, which is the source of the problem.
import os
from tempfile import SpooledTemporaryFile

from storages.backends.s3boto3 import S3Boto3Storage

class CustomS3Boto3Storage(S3Boto3Storage):
    """
    This is our custom version of S3Boto3Storage that fixes a bug in
    boto3 where the passed-in file is closed upon upload.
    From:
    https://github.com/matthewwithanm/django-imagekit/issues/391#issuecomment-275367006
    https://github.com/boto/boto3/issues/929
    https://github.com/matthewwithanm/django-imagekit/issues/391
    """
    location = 'media'
    file_overwrite = False

    def _save(self, name, content):
        """
        Create a clone of the content file, because when it is passed to
        boto3 it is wrongly closed upon upload, whereas the storage
        backend expects it to still be open.
        """
        # Seek our content back to the start
        content.seek(0, os.SEEK_SET)
        # Create a temporary file that spills to disk after a certain size.
        # It is deleted automatically when closed by boto3, or on exiting
        # the `with` statement once the boto3 bug is fixed.
        with SpooledTemporaryFile() as content_autoclose:
            # Write the original content into the copy that boto3 will close
            content_autoclose.write(content.read())
            # Upload the object; this auto-closes the content_autoclose instance
            return super(CustomS3Boto3Storage, self)._save(name, content_autoclose)
I can't seem to understand how it is possible that for GCS the authenticated URL shows a different image than the public URL. I'm uploading the images via a Python/Django script:
def upload_to_cloud(blob_name, file_obj):
    file_type = imghdr.what(file_obj)
    blob_name = str(blob_name) + '.' + file_type  # build 'file_name.format'
    # check if a logo with the same reg. nr. exists
    stats = storage.Blob(bucket=bucket, name=blob_name).exists(client)
    if stats is True:  # if it exists, delete it before uploading the new logo
        storage.Blob(bucket=bucket, name=blob_name).delete()
    blob = bucket.blob(blob_name)
    blob.upload_from_file(file_obj=file_obj, content_type=f'image/{file_type}')
    path = blob.public_url
    return path
class CompanyProfile(SuccessMessageMixin, UpdateView):
    # TODO why company logo differs from the one in ads_list?
    model = Company
    form_class = CompanyProfileCreationForm

    def form_valid(self, form):
        """
        Check if the user uploaded a new logo. If yes,
        upload the new logo to Google Cloud.
        """
        if 'logo' in self.request.FILES:
            blob_name = self.request.user.company.reg_nr  # company registration number
            file_obj = self.request.FILES['logo']  # the uploaded file
            # update company.logo_url with the path to the uploaded file
            form.instance.logo_url = upload_to_cloud(blob_name, file_obj)
            company = Company.objects.get(pk=self.request.user.company.pk)
            company.save()
        return super().form_valid(form)
Any ideas on what I'm doing wrong, and how this is even possible? The file that I actually uploaded is the one under the authenticated URL. The file under the public URL is one that I uploaded for a different blob.
EDIT
I'm adding screenshots of the two different images, because after some time the images appear to be the same, as they should be. Some people are confused by this and comment that the images are the same after all.
Public URL
Authenticated URL
Note that a caching issue is ruled out, since I sent the public URL to my friend and he also saw that the image was the HTML text, while the image at the authenticated URL (the correct one) was a light bulb. He also noted that the URL preview in FB Messenger showed the light-bulb image, but when he actually opened the URL, the HTML-text image appeared.
This problem persists whenever a file is uploaded under the same blob name, regardless of whether it is overwritten by GCS or whether I first delete the blob and then create a new file with the same name as the deleted blob.
In general the same object will be served by storage.googleapis.com and storage.cloud.google.com.
The only exception is if there is some caching (either in your browser, in a proxy, with Cloud CDN, or in GCS). If you read the object via storage.cloud.google.com before uploading a new version, then reading it afterwards via storage.cloud.google.com may serve the old version, while storage.googleapis.com returns the new one. Caching can also be location dependent.
If you can't allow an hour of caching, set Cache control to no-cache.
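Setting Cache-Control can be done on the blob before the upload. A minimal sketch, assuming `blob` is a google-cloud-storage Blob (which exposes a writable cache_control attribute); the helper name is illustrative:

```python
def upload_no_cache(blob, file_obj, content_type):
    """Upload with Cache-Control: no-cache, so that neither GCS nor any
    intermediary serves a stale copy after the blob is re-uploaded."""
    blob.cache_control = 'no-cache'  # must be set before the upload
    blob.upload_from_file(file_obj=file_obj, content_type=content_type)
    return blob.public_url
```

In the question's upload_to_cloud, this would replace the plain blob.upload_from_file call.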
I need to save an image on Amazon WS S3 using Django.
I have this code:
try:
    img_temp = NamedTemporaryFile(delete=True)
    img_temp.write(urllib2.urlopen(urlImageToDownload).read())
    img_temp.flush()
    p.image.save('image.jpg', File(img_temp))
    p.save()  # p.image is an ImageField
except urllib2.HTTPError:
    continue
I integrated django-storages to save the image directly to Amazon S3.
I have to save the image during a request (the request contains the URL of an image that I have to download and re-save to Amazon S3), but I would like to make this line:
p.image.save('image.jpg', File(img_temp))
asynchronous.
That way I can return the response of my request immediately, while the image is saved to S3 in the background.
How can I do this? Is it possible?
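Celery is the usual answer for offloading work like this from the request cycle. As a minimal stdlib-only illustration of the pattern (not production-grade: the threads die with the process and errors are easy to lose), the blocking save can be pushed to a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Module-level pool shared across requests
_executor = ThreadPoolExecutor(max_workers=4)

def save_async(save_fn, *args, **kwargs):
    """Run a blocking save (e.g. p.image.save) in the background and
    return a Future immediately, so the view can respond right away."""
    return _executor.submit(save_fn, *args, **kwargs)
```

In the view, save_async(p.image.save, 'image.jpg', File(img_temp)) would return at once. A Celery task is the more robust choice, since it survives process restarts and can retry failed uploads.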
I have an ImageField in one of my Django models. Each of these images has a user (or group of users) who have access to them; no other users should be able to see them.
The ImageField stores the image file in the media root. Any web request for that image (via the image path) bypasses Django and gets served directly by Apache.
How do I ensure that only the users that are authorized to request the images can actually get them?
Add a new view to serve the image, and store the image in a path that Apache cannot serve directly.
In the new view, check the user's group and serve the image;
if the user is not authorized, send a 403.
@login_required
def serve_file(request, context):
    if <check if they have access to the file>:
        filename = "/var/www/myfile.xyz"
        response = HttpResponse(content_type='application/force-download')
        response['Content-Disposition'] = 'attachment; filename="%s"' % os.path.basename(filename)
        response["X-Sendfile"] = filename
        response['Content-length'] = os.stat(filename).st_size
        return response
    return <error state>
I have a view to which I am trying to submit multiple AJAX uploads via raw POST data (e.g. via an octet-stream). These requests are submitted one after the other so that they process in parallel. The problem is that Django thinks only the last request is valid. For example, if I submit 5 files, the first four give:
Upload a valid image. The file you uploaded was either not an image or a corrupted image.
and the last one works fine.
I'm guessing this occurs because the requests somehow overlap, so the image isn't completely loaded before the form attempts to validate it?
My upload view:
def upload(request):
    form = UploadImageForm(request.POST, request.FILES)
    print form
    if form.is_valid():
        # ..process image..
And my upload image form:
class UploadImageForm(forms.Form):
    upload = forms.ImageField()
To submit the requests I'm using the html5uploader js pretty much right out of the box.
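One common cause with raw octet-stream uploaders is that the file bytes arrive in the request body rather than as multipart form data, so request.FILES does not contain what the form expects. A hedged sketch of wrapping the raw body into an uploaded-file object the form can validate (the X-File-Name header and the helper names are assumptions about what html5uploader sends, not confirmed by the question):

```python
def upload_name(meta, default='upload.jpg'):
    """Pick a file name from the request headers, if the uploader sent one."""
    return meta.get('HTTP_X_FILE_NAME', default)

def file_from_raw_body(request):
    """Wrap the raw request body in an uploaded-file object so that
    UploadImageForm can validate it like a normal multipart upload."""
    from django.core.files.uploadedfile import SimpleUploadedFile  # deferred: Django
    return SimpleUploadedFile(
        upload_name(request.META),
        request.body,
        request.META.get('CONTENT_TYPE', 'application/octet-stream'),
    )
```

The view would then build the form with files={'upload': file_from_raw_body(request)} instead of request.FILES, so each request validates its own body independently.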
On a different note, have you tried https://github.com/blueimp/jQuery-File-Upload/ ? It's a pretty good non-Flash-based file uploader with a progress bar.