Displaying an image from a BLOB using ColdFusion

I want to display an image from a database. I'm using the BLOB data type for that image.
I've already tried #CharsetEncode(viewPoint.ppp_icons, "ASCII")#, but it didn't work.

<cfimage action="writeToBrowser" source="#imageBlob#">
http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_i_02.html
Or use the data URI scheme (with limited browser support):
<img src="data:image/png;base64,#toBase64(imageBlob)#" />

<cfcontent reset="Yes" type="image/gif" variable="#QueryName.BlobColumn#" />
EDIT: To clarify, you would place this code in a separate template and reference that template in the src attribute of your img tag. You would pass in the primary key of the database row, and the new template would look up the blob column and output it (a sketch follows below).
I believe it is better to keep all blob data in separate requests: if you are displaying many files on a page, it is best to let the page load first rather than make the initial page load wait until all of the blob data has been downloaded from the SQL server to the ColdFusion server. Reducing initial page load time is of crucial importance to usability.
As an addendum, if these files are going to be accessed frequently, it is also best to cache the file on the web server and have the separate template redirect to the cached file.
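For illustration, here is a minimal sketch of such a template; the datasource, table, and id column names are assumptions (only ppp_icons comes from the question):
<!--- getImage.cfm: referenced as <img src="getImage.cfm?id=123"> --->
<cfparam name="url.id" type="integer">
<cfquery name="qIcon" datasource="myDSN">
    SELECT ppp_icons
    FROM view_point
    WHERE id = <cfqueryparam value="#url.id#" cfsqltype="cf_sql_integer">
</cfquery>
<cfcontent reset="yes" type="image/png" variable="#qIcon.ppp_icons#" />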

Correct way to fetch data from an AWS server into a Flutter app?

I have a general understanding question. I am building a Flutter app that relies on a content library containing text files, LaTeX equations, images, PDFs, videos, etc.
The content lives on an AWS Amplify backend. Depending on the user's navigation in the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is that the data is stored in an S3 bucket. When data is requested, it is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel it is not the way it should be done.
When data is downloaded, a file-transfer notification pops up, which bothers me because it is shown all the time. I would also like to read the data directly with something like a GET request, without downloading the file first (especially text files, which I would like to read directly into a String). But I don't know how that works, because I don't see how you can store data in a file system with the other Amplify services like DataStore or the REST API. Since the S3 bucket is an intuitive way of storing data that is easy for the content creators at my company to use, S3 seems like the way to go. However, with S3 I have only figured out the download method for fetching data.
Could someone give me a hint on what the correct approach is for this use case? Thank you very much!
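Not a Flutter answer, but to illustrate the "read it directly" idea, here is a minimal sketch in Python with boto3 (bucket and key names are made up): an S3 object can be read straight into memory, or exposed through a short-lived presigned URL that any HTTP client can GET.
import boto3

s3 = boto3.client('s3')  # credentials and region come from the environment

# Read a text object straight into a string; no temporary file involved.
obj = s3.get_object(Bucket='my-content-bucket', Key='chapters/intro.txt')
text = obj['Body'].read().decode('utf-8')

# Or generate a short-lived URL the app can fetch with a plain GET request.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-content-bucket', 'Key': 'chapters/intro.txt'},
    ExpiresIn=3600,  # seconds
)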

Google Earth: is a dynamic parametrized KML possible?

I made a large KML file, about 23 MB. Google Earth takes a very long time to render it, lags, and occupies 1 GB of RAM or more. On slower computers it may also fail to render some areas.
So the idea is to use a parametrized GET request to the server that returns KML data only for a region with the specified boundaries.
Can Google Earth initiate and use such requests?
What you're asking can be done with a NetworkLink. If you dynamically generate the KML from a servlet, web-service, script, etc. then you can instruct Google Earth to send the bounding box for its view from which you can generate the KML to return. This approach requires hosting a custom "service" on an application server/web server that can generate KML in response to requests sent by Google Earth.
In your root-level NetworkLink you need to define refreshMode=onChange, to refresh whenever the Link parameters change, along with the URL of the servlet. It is recommended to set viewRefreshMode=onStop with a viewRefreshTime element so the data is only fetched one second after the user stops zooming/moving around; otherwise the data is continually refreshed. The viewFormat element is also needed to instruct Google Earth to send the bounding box of the view. In this example, the BBOX parameter is added to the HTTP parameters sent to the servlet in an HTTP GET request.
<Link>
<href>servlet-url</href>
<refreshMode>onChange</refreshMode>
<viewRefreshMode>onStop</viewRefreshMode>
<viewRefreshTime>1</viewRefreshTime>
<viewFormat>BBOX=[bboxWest],[bboxSouth],[bboxEast],[bboxNorth]</viewFormat>
</Link>
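The server side can be anything that parses the BBOX parameter and returns KML. Here is a minimal sketch in Python/Flask, where build_placemarks is a hypothetical helper that queries your data for the given bounds:
from flask import Flask, Response, request

app = Flask(__name__)

@app.route('/kml')
def kml_for_view():
    # Google Earth substitutes the current view bounds into the BBOX
    # parameter as west,south,east,north (per the viewFormat above).
    west, south, east, north = (float(v) for v in request.args['BBOX'].split(','))
    body = build_placemarks(west, south, east, north)  # hypothetical data lookup
    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           + body + '</Document></kml>')
    return Response(kml, mimetype='application/vnd.google-earth.kml+xml')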
If your data spans a large area, you could break the data up into separate KML files and then specify Region-based NetworkLinks in a parent KML file. This approach lets you generate the data once as static KML files and serve up only the data that is "active" based on the user's view.
Related Tutorial:
https://developers.google.com/kml/documentation/regions#regionbasednl
Reference:
https://developers.google.com/kml/documentation/kmlreference#networklink
https://developers.google.com/kml/documentation/kmlreference#region
https://developers.google.com/kml/documentation/kmlreference#viewformat
Yes, this isn't a problem at all. You add the source URL of your KML in Google Earth as a URL with parameters and then load it as several separate sources. With that approach, though, you are only 'dynamically' providing the criteria at the time you add the KML to GE; from then on it looks like any other static KML file you would have loaded.
EDIT: I see now (logging into GE) that it actually calls these network links as described by @JasonM1 (under Add > Network Link).

How to mix Django, Uploadify, and S3Boto Storage Backend?

Background
I'm doing fairly big file uploads on Django. File size is generally 10MB-100MB.
I'm on Heroku and I've been hitting the request timeout of 30 seconds.
The Beginning
In order to get around the limit, Heroku's recommendation is to upload from the browser DIRECTLY to S3.
Amazon documents this by showing you how to write an HTML form to perform the upload.
Since I'm on Django, rather than write the HTML by hand, I'm using django-uploadify-s3 (example). This provides me with an SWF object, wrapped in JS, that performs the actual upload.
This part is working fine! Hooray!
The Problem
The problem is in tying that data back to my Django model in a sane way.
Right now the data comes back as a simple URL string, pointing to the file's location.
However, I was previously using S3 Boto from django-storages to manage all of my files as FileFields, backed by the delightful S3BotoStorageFile.
To reiterate, S3 Boto is working great in isolation, Uploadify is working great in isolation, the problem is in putting the two together.
My understanding is that the only way to populate the FileField is by providing both the filename AND the file content. When you're uploading files from the browser to Django, this is no problem, as Django has the file content in a buffer and can do whatever it likes with it. However, when doing direct-to-S3 uploads like me, Django only receives the file name and URL, not the binary data, so I can't properly populate the FieldFile.
Cry For Help
Anyone know a graceful way to use S3Boto's FileField in conjunction with direct-to-S3 uploading?
Otherwise, what's the best way to manage an S3 file based only on its URL, including setting expiration, key ID, etc.?
Many thanks!
Use a URLField.
I had a similar issue where I wanted either to store the file to S3 directly using a FileField, or to give the user the option of entering a URL directly. To handle that, I used two fields in my model, one FileField and one URLField, and in the template I used 'or' to pick whichever one exists, like {{ instance.filefield or instance.url }}.
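A minimal sketch of that two-field model (the model and field names here are made up):
from django.db import models

class Asset(models.Model):
    # One of the two is populated, depending on how the file arrived.
    filefield = models.FileField(upload_to='uploads/', blank=True)
    url = models.URLField(blank=True)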
This is untested, but you should be able to use:
from django.core.files.storage import default_storage

# Open the file that was uploaded directly to S3; f is an instance of
# S3BotoStorageFile and can be assigned to a FileField.
f = default_storage.open('name_you_expect_in_s3', 'r')
obj, created = YourObject.objects.get_or_create(**stuff_you_know)
obj.s3file_field = f
obj.save()
I think this should set up the local pointer to S3 and save it, without overwriting the content.
ETA: You should do this only after the upload completes on S3 and you know the key in S3.
Check out django-filetransfers. It looks like it plays nicely with django-storages.
I've never used Django, so YMMV :) But why not just write a single byte to populate the content? That way, you can still use FieldFile.
I'm thinking that writing actual SQL may be the easiest solution here. Alternatively, you could subclass S3BotoStorage, override the _save method, and allow an optional filepath kwarg that sidesteps all the other saving logic and just returns the cleaned name.
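A sketch of that subclass idea (untested; _save and _clean_name are internals of the old S3BotoStorage backend, so treat their behavior here as an assumption):
from storages.backends.s3boto import S3BotoStorage

class DirectUploadStorage(S3BotoStorage):
    def _save(self, name, content, filepath=None):
        # When the file is already on S3, pass its path and skip the upload;
        # the FileField then just stores the cleaned key.
        if filepath:
            return self._clean_name(filepath)
        return super(DirectUploadStorage, self)._save(name, content)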

Trouble uploading lots of files in Django

I'm having problems uploading lots of files in Django. The context is the following: I have a spreadsheet in which one or more columns contain image filenames; those images are uploaded through a form with an input of type=file and the multiple option.
With a few rows, say 70, everything goes fine. But with more rows, and consequently more images, an IOError occurs at random positions.
I've checked several questions about file/image uploads in Django but couldn't find any related to my problem.
The model I'm using is the Product model of LFS (www.getlfs.com). We are developing a system based on LFS, and to facilitate the creation of dozens of products in a batch we wrote some views and templates to receive the main product properties through a spreadsheet. Each row is a product and the columns are the desired properties.
LFS uses a custom class, ImageWithThumbsField(ImageField), to store the product's image, and when the product instance (built from the spreadsheet) is saved, all thumbnails are generated. This is a time- and CPU-consuming task, and my initial guess is that for some reason the temporary files are deleted before all processing has occurred.
Is there a way to keep these uploaded files around longer? Any other suggested approach for processing hundreds of uploaded files? Any hints on what might be happening?
I hope you can understand my question. I can post code if needed.
Links to relevant portions of LFS code:
where thumbnails are generated:
https://github.com/diefenbach/django-lfs/blob/master/lfs/core/fields/thumbs.py
product model:
https://github.com/diefenbach/django-lfs/blob/master/lfs/catalog/models.py
Thanks in advance!
It sounds like you are running out of memory. When Django processes uploads, until the form is validated, all of the files are either:
1. Kept in memory inside the Python/WSGI process/worker (the usual mode of operation for runserver). In this case you are uploading enough photos to fill up the process memory and running out of space. As you can imagine, where the IOError happens will be non-deterministic (it depends on garbage collection).
2. Temporarily stored in /tmp/ (the usual setup with Apache). In this case the web server's ramfs fills up with images that have not yet been written to disk, and the IOError should occur around the same place each time.
In either case, you should not be bulk-uploading images this way; Apache/Django is not designed for it. Try uploading a single product/image per request/response, and all your problems will go away.
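Relatedly, the point at which Django switches between these two behaviours is configurable. A sketch of the relevant settings (the values shown are illustrative, not recommendations):
# settings.py
# Uploads larger than this are streamed to a temp file instead of being held
# in memory; Django's default is 2.5 MB.
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440

# Where the temporary-file handler writes uploads (defaults to the system tmp dir).
FILE_UPLOAD_TEMP_DIR = '/var/tmp/django_uploads'

FILE_UPLOAD_HANDLERS = [
    'django.core.files.uploadhandler.MemoryFileUploadHandler',
    'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]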

How do I track images embedded in HTML?

I'd like to track the views/impressions of images on web pages, but still allow the images to be embedded in HTML, as in an <img src="http://mysite.com/upload/myimage.jpg"/> element.
I know that on Windows I can write a handler for ".jpg" so the URL will actually trigger a handling function instead of loading the image from disk. Is it possible to do that in Python/Django on an Ubuntu server? Can web browsers still cache the jpg files if the URL is not a straight file path?
It looks to me like this is how Google Picasaweb handles image file names. I'd like to get some ideas on how to implement that.
Thanks!
-Yi
If you don't want images to be cached, just append a timestamp to them. This example is in PHP, but you get the idea:
<img src=<?php echo '"../images/myimage.gif?' . time() . '"'; /* append the time so the image is not cached */ ?> />
Then you can analyze your logs to track views, etc.
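For the Django side of the original question, here is a minimal sketch of a view that records a hit and then serves the image (record_impression and the paths are hypothetical); browsers can still cache the file if you send normal cache headers:
import os
from django.http import FileResponse, Http404

IMAGE_ROOT = '/srv/site/upload'  # hypothetical directory holding the images

def tracked_image(request, filename):
    # basename() prevents path traversal via the URL.
    path = os.path.join(IMAGE_ROOT, os.path.basename(filename))
    if not os.path.isfile(path):
        raise Http404
    record_impression(filename)  # hypothetical: bump a counter or write a log row
    response = FileResponse(open(path, 'rb'), content_type='image/jpeg')
    response['Cache-Control'] = 'max-age=3600'  # browsers may still cache it
    return response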