I have a lot of user-uploaded content and I want to validate that uploaded image files are not, in fact, malicious scripts. The Django documentation states that ImageField:
"Inherits all attributes and methods from FileField, but also validates that the uploaded object is a valid image."
Is that totally accurate? I've read that compressing or otherwise manipulating an image file is a good validation test. I'm assuming that PIL does something like this....
Will ImageField go a long way toward covering my image upload security?
Django validates images uploaded via a form using PIL.
See https://code.djangoproject.com/browser/django/trunk/django/forms/fields.py#L519
try:
    # load() is the only method that can spot a truncated JPEG,
    # but it cannot be called sanely after verify()
    trial_image = Image.open(file)
    trial_image.load()

    # Since we're about to use the file again we have to reset the
    # file object if possible.
    if hasattr(file, 'reset'):
        file.reset()

    # verify() is the only method that can spot a corrupt PNG,
    # but it must be called immediately after the constructor
    trial_image = Image.open(file)
    trial_image.verify()
    ...
except Exception: # Python Imaging Library doesn't recognize it as an image
    raise ValidationError(self.error_messages['invalid_image'])
PIL documentation states the following about verify():
Attempts to determine if the file is broken, without actually decoding
the image data. If this method finds any problems, it raises suitable
exceptions. This method only works on a newly opened image; if the
image has already been loaded, the result is undefined. Also, if you
need to load the image after using this method, you must reopen the
image file.
You should also note that ImageField is only validated when the file is uploaded via a form. If you save the model yourself (e.g. using some kind of download script), that validation is not performed.
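If you do save images outside a form, one option is to run the same PIL checks yourself before assigning the file to the model. A minimal sketch, mirroring the form-field code quoted above (the helper name is made up):

from PIL import Image

def is_valid_image(path):
    # Rough manual equivalent of the form-level check, for files saved
    # outside a form (e.g. by a download script).
    try:
        img = Image.open(path)
        img.load()              # load() catches truncated JPEGs
        img = Image.open(path)  # reopen: verify() must follow the constructor
        img.verify()            # verify() catches corrupt PNGs
    except Exception:
        return False
    return True

# e.g. before assigning the file to the model's ImageField:
# if not is_valid_image('/tmp/downloaded.jpg'):
#     raise ValueError('not a valid image')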
Another test is the file command. It checks for the presence of "magic numbers" in the file to determine its type. On my system, the file package includes libmagic as well as a ctypes-based wrapper /usr/lib64/python2.7/site-packages/magic.py. It looks like you use it like this:
import magic
ms = magic.open(magic.MAGIC_NONE)
ms.load()
type = ms.file("/path/to/some/file")
print type
f = file("/path/to/some/file", "r")
buffer = f.read(4096)
f.close()
type = ms.buffer(buffer)
print type
ms.close()
(Code from here.)
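To hook that into an upload view, you could sniff the first few kilobytes of the uploaded file and reject anything whose detected MIME type is not an image. A sketch, assuming the same ctypes-based magic module as above; the whitelist of accepted types is an assumption you should adjust:

import magic
from django.http import HttpResponse, HttpResponseBadRequest

ACCEPTED_TYPES = ('image/png', 'image/jpeg', 'image/gif')

def upload(request):
    uploaded = request.FILES['image']
    ms = magic.open(magic.MAGIC_MIME)          # ask libmagic for a MIME type
    ms.load()
    detected = ms.buffer(uploaded.read(4096))  # sniff the first 4 KB
    ms.close()
    uploaded.seek(0)                           # rewind before any further processing
    if not detected or not detected.startswith(ACCEPTED_TYPES):
        return HttpResponseBadRequest('not an accepted image type')
    # ... continue with normal handling of the upload ...
    return HttpResponse('OK')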
As to your original question: "Read the Source, Luke."
django/core/files/images.py:
"""
Utility functions for handling images.
Requires PIL, as you might imagine.
"""
from django.core.files import File
class ImageFile(File):
"""
A mixin for use alongside django.core.files.base.File, which provides
additional features for dealing with images.
"""
def _get_width(self):
return self._get_image_dimensions()[0]
width = property(_get_width)
def _get_height(self):
return self._get_image_dimensions()[1]
height = property(_get_height)
def _get_image_dimensions(self):
if not hasattr(self, '_dimensions_cache'):
close = self.closed
self.open()
self._dimensions_cache = get_image_dimensions(self, close=close)
return self._dimensions_cache
def get_image_dimensions(file_or_path, close=False):
"""
Returns the (width, height) of an image, given an open file or a path. Set
'close' to True to close the file at the end if it is initially in an open
state.
"""
# Try to import PIL in either of the two ways it can end up installed.
try:
from PIL import ImageFile as PILImageFile
except ImportError:
import ImageFile as PILImageFile
p = PILImageFile.Parser()
if hasattr(file_or_path, 'read'):
file = file_or_path
file_pos = file.tell()
file.seek(0)
else:
file = open(file_or_path, 'rb')
close = True
try:
while 1:
data = file.read(1024)
if not data:
break
p.feed(data)
if p.image:
return p.image.size
return None
finally:
if close:
file.close()
else:
file.seek(file_pos)
So it looks like it just reads the file 1024 bytes at a time until PIL recognizes it as an image, then stops. This obviously does not integrity-check the entire file, so it really depends on what you mean by "covering my image upload security": illicit data could be appended to an image and passed through your site. Someone could DoS your site by uploading a lot of junk or a really big file. You could be vulnerable to an injection attack if you don't check any uploaded captions or if you make assumptions about the image's uploaded filename. And so on.
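If appended data is one of your concerns, a common mitigation (not something Django does for you) is to re-encode every upload with PIL so that only the decoded pixel data survives. A rough sketch:

from io import BytesIO
from PIL import Image

def reencode_image(uploaded_file):
    # Fully decode the upload and write a fresh copy, dropping anything
    # appended after the image data. Note this re-compresses the image.
    img = Image.open(uploaded_file)
    img.load()
    out = BytesIO()
    img.save(out, format=img.format)  # keep the original format
    out.seek(0)
    return out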
How can I direct the destination of the output file to my db?
My models.py is structured like so:
class Model(models.Model):
    char = models.CharField(max_length=50, null=False, blank=False)
    file = models.FileField(upload_to=upload_location, null=True, blank=True)
I have the user enter a value for 'char', and then the value of 'char' is printed onto a file. The printing onto the file works successfully; however, the file ends up in my source directory.
My goal is to have the output file 'pdf01.pdf' stored via my db and represented as 'file' so that the admin can read it.
Much of the information in the Django docs is focused on handling files uploaded directly by the user, not on files that have been created internally. I have been reading mostly from these docs:
Models-Fields
Models
File response objects
Outputting PDFs
I have seen it recommended to write to a buffer rather than a file, and then save the buffer contents to my db; however, I haven't been able to find many examples online of how to do that for my situation.
Perhaps there is a relevant gap in my knowledge regarding buffers and BytesIO? Here is the code I have been using to alter the PDF. I have been using BytesIO to temporarily store files throughout the process, but I have not been able to figure out how to use it to direct the output anywhere specific.
buffer = BytesIO()
can = canvas.Canvas(buffer, pagesize=letter)
can.drawString(10, 10, char)
can.save()

buffer.seek(0)
text_pdf = PdfFileReader(buffer)
base_file = PdfFileReader(open("media/01.pdf", "rb"))

page = base_file.getPage(0)
page.mergePage(text_pdf.getPage(0))

writer = PdfFileWriter()
writer.addPage(page)
writer.write(open("pdf01.pdf", "wb"))
FileField does not store files directly in the database. Files are uploaded to a location on the filesystem determined by the upload_to argument; only some metadata is stored in the DB, including the path of the file on your filesystem.
If you want to have the contents of the files in the database, you could create a new File model that includes a BinaryField to store the data and a CharField to store the URL from which the file can be fetched. To feed the output of PdfFileWriter into the binary field, the most appropriate tool is probably BytesIO, for example:
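A rough sketch of that idea; the model and field names here are hypothetical, not something Django provides out of the box:

from io import BytesIO

from django.db import models

class StoredPdf(models.Model):
    url = models.CharField(max_length=200)   # where the file can be fetched
    data = models.BinaryField()              # the raw PDF bytes

# elsewhere, after building `writer` (a PdfFileWriter):
buffer = BytesIO()
writer.write(buffer)                         # PdfFileWriter accepts any file-like object
StoredPdf.objects.create(url='/files/pdf01.pdf', data=buffer.getvalue())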
I found this workaround to direct the file to a desired location (in this case it goes both to my media_cdn folder and to the admin as a download).
I set up an admin action that performs the function that outputs the file, so the admin has access to the output both as an HTTP response and through the media_cdn storage.
Hope this helps anyone who struggles with the same problem.
# admin.py
def edit_and_output(modeladmin, request, queryset):
    author = Account.email
    # alter file . . . (the elided code builds `output`, a PdfFileWriter)
    # write a copy into media_cdn storage
    with open('media_cdn/account/{0}.pdf'.format(author), 'wb') as out_file:
        output.write(out_file)
    # and return the same PDF to the admin as a download
    response = HttpResponse(content_type='application/pdf')
    response['Content-Disposition'] = 'attachment;filename="{0}.pdf"'.format(author)
    output.write(response)
    return response
I have a Django app where users can upload images and can optionally get a processed version of them. The processing function returns the path, so my approach was:
model2.processed_image = processingfunction( model1.uploaded_image.path)
and as the processing function returns a path, here's how it looks in my admin view: not like the normally uploaded images.
On my machine it worked correctly, but now I always get a 404 error for the processed images while the normally uploaded ones are shown correctly, unless I change the URL of a processed image from
myurl.com/media/home/ubuntu/Eyelizer/media/path/to/the/image
to
myurl.com/media/path/to/the/image
So how can I fix this? Is there a better approach to saving the images manually to the database?
I also have a version of the function that returns a PIL.Image.Image object, and I tried many methods to save that to a model but couldn't figure out how, so I made the function return a file path instead.
I think the problem comes from nginx, where I define the media path.
Should/can I override the url attribute of processed_image? Something like:
model.processed_image.url = 'media/somefolder/filename'
Instead of using the PIL Image directly, create a django.core.files.File.
Example:
from io import BytesIO
from django.core.files import File

img_io = BytesIO()  # create a BytesIO object to temporarily hold the image in memory
img = processingfunction(model1.uploaded_image.path)
img.save(img_io, 'PNG')  # save the PIL image into the BytesIO object

# create the File object; you can reuse the `name` from `model1.uploaded_image` here
img_file = File(img_io, name='some-name.png')

# finally, pass the image file to your model field
model2.processed_image = img_file
To avoid repeating this code, it would be a good idea to keep it inside processingfunction and return the File object directly from there.
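A sketch of that refactor (the actual processing body is elided, and the default name is an assumption):

from io import BytesIO

from django.core.files import File
from PIL import Image

def processingfunction(image_path, name='processed.png'):
    img = Image.open(image_path)
    # ... the actual processing on `img` happens here ...
    img_io = BytesIO()
    img.save(img_io, 'PNG')
    return File(img_io, name=name)

# then assigning becomes a one-liner:
# model2.processed_image = processingfunction(model1.uploaded_image.path)
# model2.save()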
My approach is a bit different from @Xyres's: I thought that approach would make a duplicate of the existing image and create a new one, and when I tried overriding the url attribute it raised
can't set attribute
But after seeing this question and this ticket, I tried the following and it worked:
model2.processed_image = processingfunction(model1.uploaded_image.path)
full_path = model2.processed_image.path
model2.processed_image.name = full_path.split('media')[1]
This explicitly makes the URL media/path/to/image and cuts out all of the unneeded parts like home/ubuntu.
I'm writing a Django function that takes some user input, and generates a pdf for the user. However, the process for generating the pdf is quite intensive, and I'll get a lot of repeated requests so I'd like to store the generated pdfs on the server and check if they already exist before generating them.
The problem is that django-wkhtmltopdf (which I'm using for generation) is meant to return the PDF to the user directly, and I'm not sure how to store it in a file.
I have the following, which works for returning a pdf at /pdf:
urls.py
urlpatterns = [
    url(r'^pdf$', views.createPDF.as_view(template_name='site/pdftemplate.html', filename='my_pdf.pdf'))
]
views.py
class createPDF(PDFTemplateView):
    filename = 'my_pdf.pdf'
    template_name = 'site/pdftemplate.html'
So that works fine to create a pdf. What I'd like is to call that view from another view and save the result. Here's what I've got so far:
# Create pdf
pdf = createPDF.as_view(template_name='site/pdftemplate.html', filename='my_pdf.pdf')
pdf = pdf(request).render()

pdfPath = os.path.join(settings.TEMP_DIR, 'temp.pdf')
with open(pdfPath, 'w') as f:
    f.write(pdf.content)
This creates temp.pdf and is about the size I'd expect but the file isn't valid (it renders as a single completely blank page).
Any suggestions?
Elaborating on the previous answer: to generate a pdf file and save it to disk, do this anywhere in your view:
...
context = {...}  # build your context

# generate response
response = PDFTemplateResponse(
    request=self.request,
    template=self.template_name,
    filename='file.pdf',
    context=context,
    cmd_options={'load-error-handling': 'ignore'})

# write the rendered content to a file
with open("file.pdf", "wb") as f:
    f.write(response.rendered_content)
...
I have used this code in a TemplateView class so request and template fields were set like that, you may have to set it to whatever is appropriate in your particular case.
Well, you need to take a look at the code of django-wkhtmltopdf. First, use the PDFTemplateResponse class in wkhtmltopdf.views to get access to the rendered_content property; this property gives us access to the pdf file:
response = PDFTemplateResponse(
    request=<your_view_request>,
    template=<your_template_to_render>,
    filename=<pdf_filename.pdf>,
    context=<a_dictionary_to_render>,
    cmd_options={'load-error-handling': 'ignore'})
Now you could use the rendered_content property to get access to the pdf file:
mail.attach('pdf_filename.pdf', response.rendered_content, 'application/pdf')
In my case I'm attaching this pdf to an email, but you could store it instead.
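For example, a sketch of pushing the same rendered bytes into a FileField (Report, pdf_file and some_pk here are hypothetical names, stand-ins for your own model):

from django.core.files.base import ContentFile

report = Report.objects.get(pk=some_pk)
report.pdf_file.save('pdf_filename.pdf', ContentFile(response.rendered_content), save=True)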
I wrote a command-line routine to import a KML file into a GeoDjango application, which works fine when you feed it a locally saved KML file path (using the DataSource object).
Now I am writing a web file-upload dialog to achieve the same thing. This is the beginning of the code I have; the problem is that the GDAL DataSource object does not seem to understand Django's UploadedFile format: it is held in memory, not a file path as expected.
What would be the best strategy to convert the UploadedFile to a normal file and access it through a path? I don't want to keep the file after processing.
def createFeatureSet(request):
    if request.method == 'POST':
        inMemoryFile = request.FILES['myfile']
        name = inMemoryFile.name
        POSTGIS_SRID = 900913
        ds = DataSource(inMemoryFile)  # This line doesn't work!!!

        for layer in ds:
            if layer.geom_type in (OGRGeomType('Point'), OGRGeomType('Point25D'), OGRGeomType('MultiPoint'), OGRGeomType('MultiPoint25D')):
                layerGeomType = OGRGeomType('MultiPoint').django
            elif layer.geom_type in (OGRGeomType('LineString'), OGRGeomType('LineString25D'), OGRGeomType('MultiLineString'), OGRGeomType('MultiLineString25D')):
                layerGeomType = OGRGeomType('MultiLineString').django
            elif layer.geom_type in (OGRGeomType('Polygon'), OGRGeomType('Polygon25D'), OGRGeomType('MultiPolygon'), OGRGeomType('MultiPolygon25D')):
                layerGeomType = OGRGeomType('MultiPolygon').django
DataSource is a wrapper around GDAL's C API and needs an actual file. You'll need to write your upload somewhere on disk, for instance using a tempfile. Then you can pass that file to DataSource.
Here is a suggested solution using a tempfile. I moved the processing code into its own function, which is called at the end:
import tempfile

f = request.FILES['myfile']

# write the upload to a named temporary file so GDAL can be given a real path
temp = tempfile.NamedTemporaryFile(delete=False)
temp.write(f.read())
temp.close()

createFeatureSet(temp.name, source_SRID=900913)
I'd like to store uploaded files into a specific directory that depends on the URI of the POST request. Perhaps, I'd also like to rename the file to something fixed (the name of the file input for example) so I have an easy way to grep the file system, etc. and also to avoid possible security problems.
What's the preferred way to do this in Django?
Edit: I should clarify that I'd be interested in possibly doing this as a file upload handler to avoid writing a large file twice to the file system.
Edit2: I suppose one can just 'mv' the tmp file to a new location. That's a cheap operation if on the same file system.
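Regarding Edit2: when Django has streamed a large upload to a temporary file (the TemporaryFileUploadHandler case), the uploaded file exposes temporary_file_path() and a move really is cheap on the same filesystem. A sketch of that idea; BASE_PATH and the fixed file name are assumptions:

import os
import shutil

from django.http import HttpResponse

def upload(request):
    uploaded = request.FILES['file']
    folder = os.path.join(BASE_PATH, request.path.replace('/', '_'))
    if not os.path.isdir(folder):
        os.makedirs(folder)
    target = os.path.join(folder, 'upload.dat')  # fixed name for easy grepping
    if hasattr(uploaded, 'temporary_file_path'):
        # already on disk: just move it (cheap if on the same filesystem)
        shutil.move(uploaded.temporary_file_path(), target)
    else:
        # small upload held in memory: write it out normally
        with open(target, 'wb') as dest:
            for chunk in uploaded.chunks():
                dest.write(chunk)
    return HttpResponse('saved to %s' % target)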
Fixed olooney's example. It is working now:
import os

from django.core.files.base import ContentFile
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def upload_video_file(request):
    folder = 'tmp_dir2/'  # request.path.replace("/", "_")
    uploaded_filename = request.FILES['file'].name
    BASE_PATH = '/home/'

    # create the folder if it doesn't exist.
    try:
        os.mkdir(os.path.join(BASE_PATH, folder))
    except OSError:
        pass

    # save the uploaded file inside that folder.
    full_filename = os.path.join(BASE_PATH, folder, uploaded_filename)
    fout = open(full_filename, 'wb+')
    file_content = ContentFile(request.FILES['file'].read())

    try:
        # Iterate through the chunks.
        for chunk in file_content.chunks():
            fout.write(chunk)
        fout.close()
        html = "<html><body>SAVED</body></html>"
        return HttpResponse(html)
    except Exception:
        html = "<html><body>NOT SAVED</body></html>"
        return HttpResponse(html)
Django gives you total control over where (and if) you save files. See: http://docs.djangoproject.com/en/dev/topics/http/file-uploads/
The below example shows how to combine the URL and the name of the uploaded file and write the file out to disk:
def upload(request):
    folder = request.path.replace("/", "_")
    uploaded_filename = request.FILES['file'].name

    # create the folder if it doesn't exist.
    try:
        os.mkdir(os.path.join(BASE_PATH, folder))
    except:
        pass

    # save the uploaded file inside that folder.
    full_filename = os.path.join(BASE_PATH, folder, uploaded_filename)
    fout = open(full_filename, 'wb+')

    # Iterate through the chunks.
    for chunk in fout.chunks():
        fout.write(chunk)
    fout.close()
Edit: How to do this with a FileUploadHandler? I traced down through the code, and it seems like you need to do four things to repurpose the TemporaryFileUploadHandler to save outside of FILE_UPLOAD_TEMP_DIR:
1. Extend TemporaryUploadedFile and override __init__() to pass a different directory to NamedTemporaryFile. It can use the try mkdir / except pass shown above.
2. Extend TemporaryFileUploadHandler and override new_file() to use the above class.
3. Also extend __init__() to accept the directory where you want the folder to go.
4. Dynamically add the request handler, passing through a directory determined from the URL:
   request.upload_handlers = [ProgressBarUploadHandler(request.path.replace('/', '_'))]
While non-trivial, it's still easier than writing a handler from scratch: in particular, you won't have to write a single line of error-prone buffered reading. Steps 3 and 4 are necessary because FileUploadHandlers are not passed request information by default, I believe, so you'll have to tell it separately if you want to use the URL somehow. A rough sketch of these steps follows.
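Here is that sketch, written against the older Django versions this thread assumes (the class names are made up, and the exact __init__() signatures differ between Django versions, so check them against your installation):

import os
import tempfile

from django.core.files.uploadedfile import TemporaryUploadedFile
from django.core.files.uploadhandler import TemporaryFileUploadHandler

class DirectoryUploadedFile(TemporaryUploadedFile):
    """Step 1: a TemporaryUploadedFile that lands in a caller-chosen directory."""
    def __init__(self, name, content_type, size, charset, directory):
        try:
            os.mkdir(directory)
        except OSError:
            pass
        tmp = tempfile.NamedTemporaryFile(suffix='.upload', dir=directory, delete=False)
        # Call UploadedFile.__init__ directly, skipping TemporaryUploadedFile's
        # own __init__, which always uses FILE_UPLOAD_TEMP_DIR.
        super(TemporaryUploadedFile, self).__init__(tmp, name, content_type, size, charset)

class DirectoryUploadHandler(TemporaryFileUploadHandler):
    """Steps 2 and 3: accept a directory and hand it to the file class."""
    def __init__(self, directory, *args, **kwargs):
        super(DirectoryUploadHandler, self).__init__(*args, **kwargs)
        self.directory = directory

    def new_file(self, *args, **kwargs):
        super(DirectoryUploadHandler, self).new_file(*args, **kwargs)
        self.file = DirectoryUploadedFile(self.file_name, self.content_type, 0,
                                          self.charset, self.directory)

# Step 4, in the view, before request.POST or request.FILES is touched:
# request.upload_handlers = [DirectoryUploadHandler(request.path.replace('/', '_'))]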
I can't really recommend writing a custom FileUploadHandler for this. It's really mixing layers of responsibility. Relative to the speed of uploading a file over the internet, doing a local file copy is insignificant. And if the file's small, Django will just keep it in memory without writing it out to a temp file. I have a bad feeling that you'll get all this working and find you can't even measure the performance difference.