How to generate high resolution thumbnails for only some aliases? - django

I'm working on a photography website in Django.
Because the site is “responsive”, we pre-generate numerous sizes of each image using a set of aliases: in particular, 7 images with widths ranging from 960 to 3,840 pixels in 480px increments. These images are used when a photo is shown full-screen (as in, not in a list view).
The site is also built for HiDPI/Retina displays/devices, so we'd like to use the THUMBNAIL_HIGH_RESOLUTION setting to automatically prepare the @2x versions of some of the aliases, but notably NOT for the aliases used to create the range of 7 full-screen images above.
As this project is meant to show off the work of a photographer, we're using rather high quality settings, so each image starts out at roughly 3840x2160 pixels and, through our pre-generation, becomes approximately 50 MB of JPEGs. Unfortunately, nearly 50% of this is pure waste, because we only use the @2x versions of an image when we show lists or collections of images on a page. Those thumbnails are generally only 300px/600px wide and are tiny compared to our "full screen" image sets.
We've considered disabling THUMBNAIL_HIGH_RESOLUTION and just creating new aliases for the @2x versions, but it isn't clear how to generate the correct filenames with an alias.
So, how can we pre-generate HiDPI/Retina images with the standard @2x (or _2x) infix for just SOME of our aliases?
UPDATE: This is now a feature of easy_thumbnails! In an alias's options you can set HIGH_RESOLUTION: False to disable creation of the @2x version, or HIGH_RESOLUTION: True to force it. Thanks @ChrisSmiley!
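For reference, a minimal settings.py sketch of how that per-alias switch can look (the alias names and sizes below are made-up examples; only the HIGH_RESOLUTION option itself comes from the feature mentioned in the update):

# settings.py (sketch only; alias names and sizes are illustrative)
THUMBNAIL_HIGH_RESOLUTION = True  # global default: also generate @2x versions

THUMBNAIL_ALIASES = {
    '': {
        # small list-view thumbnail: keep the @2x version
        'list_thumb': {'size': (300, 200), 'crop': True, 'HIGH_RESOLUTION': True},
        # full-screen sizes: skip the @2x versions
        'full_960': {'size': (960, 0), 'HIGH_RESOLUTION': False},
        'full_1920': {'size': (1920, 0), 'HIGH_RESOLUTION': False},
        'full_3840': {'size': (3840, 0), 'HIGH_RESOLUTION': False},
    },
}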

In easy-thumbnails 1.3, the @2x infix is currently hard-coded, but in the next version users may be able to choose another infix through a configuration setting. Have a look at this pull request for details.
To answer your second question: currently it is not possible to generate Retina thumbnails only for certain aliases. easy-thumbnails has an all-or-none policy, but this, in theory, could be changed.

Related

Style transfer on a large image (in chunks?)

I am looking into various style transfer models and I noted that they all have limited resolution (when running on a Pixel 3, for example, I couldn't go beyond 1,024x1,024 without running out of memory).
I've noticed a few apps (e.g. this app) which appear to do style transfer on images up to ~10 MP. These apps also show a progress bar, which I guess means they don't just call a single TensorFlow "run" method on the entire image; otherwise they wouldn't know how much had been processed.
I would guess they are using some sort of tiling, but naively splitting the image into 256x256 tiles produces an inconsistent style (not just at the borders).
As this seems like an obvious problem I tried to find any publications about this, but I couldn't find any. Am I missing something?
Thanks!
I would guess people split the model into multiple parts (for VGG this is easy to do manually, e.g. by layers) and then use the Keras model.summary() output (or benchmarks) to estimate the relative time each part takes and thus drive the progress bar. Such a split probably also saves memory, as TensorFlow Lite might not be clever enough to release the memory holding intermediate activations from lower layers once they are no longer needed.
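A rough sketch of that idea (assumptions: TensorFlow/Keras is available, the network is a plain linear stack of layers as a VGG feature extractor is, and run_in_stages/on_progress are made-up names; a real app would weight each chunk by its estimated cost rather than assuming equal time per stage):

import tensorflow as tf

def run_in_stages(model, x, n_stages=4, on_progress=None):
    # Run a linear stack of layers in a few chunks so a progress callback
    # can fire between chunks.
    layers = [l for l in model.layers
              if not isinstance(l, tf.keras.layers.InputLayer)]
    stage_size = max(1, len(layers) // n_stages)
    out = x
    for start in range(0, len(layers), stage_size):
        for layer in layers[start:start + stage_size]:
            out = layer(out)           # forward pass for this chunk only
        if on_progress:
            on_progress(min(1.0, (start + stage_size) / len(layers)))
    return out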

Machine Vision - Hash An Image

I'm in the feasibility stage of a project and wanted to know whether the following was doable using Machine Vision:
If I wanted to see if two files were identical, I would use a hashing function of sorts (e.g. sha1 or md5) on the files and store the results in a database.
However, if I have two images where say image 1 is 90% quality and image 2 is 100% quality, then this will not work as they will have different hashes.
Using machine vision, is it possible to "look" at an image and create a signature from it, so that when another image is encountered, we can say "have we already got this image in the system", and if so, disregard the new image, and if not, save the image?
I know that you are able to perform Machine Vision comparison between two known images, e.g.:
https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
(there's a lot of code in there, so I cannot simply paste it here for reference, unfortunately)
but an image-by-image comparison would be extremely expensive.
Thanks
Python provides a module called imagehash:
imagehash encodes an image into a short perceptual hash, as shown in the commented example below.
from PIL import Image
import imagehash

# Average hash of the first image
hash = imagehash.average_hash(Image.open('./image_1.png'))
print(hash)
# d879f8f89b1bbf

# Average hash of the second image
otherhash = imagehash.average_hash(Image.open('./image_2.png'))
print(otherhash)
# ffff3720200ffff

# Compare the two hashes
print(hash == otherhash)
# False
The Python code above prints True if the two images have identical average hashes (i.e. they are perceptually the same) and False otherwise.
Thanks.
I do not know what you mean by 90% and 100%; are they JPEG compression quality settings? Regardless, you can match images using many methods, for example pure image processing approaches such as SIFT, SURF, BRISK, ORB or FREAK, or machine learning approaches such as Siamese networks. However, they are heavy for a simple computer to run (on my machine, powered by a Core i7-2670QM, from 100 to 2,000 ms for a 2-megapixel match), especially if you run them without parallelism (no GPU, AVX, ...), and especially the last one.
For hashing you may also use perceptual hash functions. They are widely used in finding cases of online copyright infringement as well as in digital forensics, because similar hashes correlate and therefore similar data can be found (for instance an image with a differing watermark) [1]. You can also search for copy-move forgery and read the papers around it to see how similar images can be found.
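As a rough sketch of the "signature" idea (assuming the imagehash package from the answer above; the threshold value and the seen_hashes list are made-up examples, and a real system would store the hashes in a database):

from PIL import Image
import imagehash

seen_hashes = []    # previously stored signatures (e.g. loaded from a database)
THRESHOLD = 5       # max Hamming distance still treated as "the same image"

def is_duplicate(path):
    # Perceptual hash: robust to re-encoding/quality changes, unlike sha1/md5
    h = imagehash.phash(Image.open(path))
    for seen in seen_hashes:
        if h - seen <= THRESHOLD:   # ImageHash subtraction gives the Hamming distance
            return True
    seen_hashes.append(h)
    return False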

How to draw connections between items in a QTreeView

I wonder how to draw the lines connecting the items in a QTreeView as illustrated in the picture under Tree Model. My program will run on different platforms and thus use different styles. Can I guarantee that the items are drawn as desired?
I feel that using style sheets might be problematic because certain styles do not draw such lines, and using a delegate might lead me into issues of double drawing.
There's an example in the documentation here showing exactly what you want to achieve using style sheets.
Please note that when you use style sheets, QStyleSheetStyle kicks in, regardless of the QStyle your application is using at the moment. So if you decide to go this way, you will override the look and feel of your control in the same way, regardless of the target platform.
If that is a problem, you may consider to use style sheets only for certain platforms. As an example:
#ifdef Q_OS_MAC
// setStyleSheet() expects the style sheet text itself, so load the .qss resource first
QFile file(":/my_stylesheet_for_mac.qss");
file.open(QFile::ReadOnly);
myControl->setStyleSheet(QString::fromUtf8(file.readAll()));
#endif
Back to the example in the documentation, it uses a few images containing all the various lines (vertical, horizontal, branch, etc) and the ::branch subcontrol and its states to determine which image to use.
The result is a tree view with the branch lines drawn between items, similar to the screenshot shown in the documentation example.
Obviously, you must change the code to show the vline picture instead of the arrows.
As a side note, I suggest considering why you want to do this if you are using native styles. If your application has a native look and feel, you should not alter it in any way. That is, if the target platform doesn't render lines connecting tree view items, then you shouldn't add them.
However, if your application is not required to look native across all the target platforms, you may consider using the same style (e.g. Fusion) and deliver the same user experience no matter what the platform is.

Insert base64 strings in Dexie.js

I am building an ionic 3 app and I want to set up an upload based on the ImagePicker Cordova plugin.
I use Dexie to persist some data, and I wonder if persisting whole base64 strings would be alright. Or is it too heavy?
I want to persist the images chosen with the image picker, so that when an upload is suspended or stopped I can restart the upload for those images.
Is anybody using any other type of persistence for base64 images?
Thank you
It depends on the size of the images. Unless the images are larger than 10 megabytes, I think you are safe. There is no direct limit on document size in IndexedDB except for the quota you are given for the whole database instance, which can vary per platform and can be extended on modern platforms using navigator.storage.persist(). Do not index the property containing the large string though, since that would affect performance badly and eventually trigger unknown bugs.
In case you target modern platforms (Chromium, Firefox and Safari 10.1), you don't need to convert the images to base64. Instead you can store the binary data directly in a property of type Uint8Array.

How to use OpenStreetMap background on Matplotlib Basemap

This should be simple, but when I look for it I just find web packages. I need something better than what is shown on This Blog, maybe using an .osm file or shapefiles: some way to give a bbox and get an OpenStreetMap background on a Basemap map.
I found some questions like this on Stack Overflow, but the answers either point to downloading a .png file from the OpenStreetMap website or to using some web package.
I would suggest not trying to make two things work together that are not (yet) made to work together.
There is a simple way to achieve what you want with Mplleaflet.
https://github.com/jwass/mplleaflet
The library allows you to visualize geographic data on a beautiful interactive OpenStreetMap. Map projection of data in lon/lat format is performed automatically.
Installation on Windows and Ubuntu is easy:
pip install mplleaflet
You can start with the provided examples and go from there.
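For example, a minimal sketch of the plot-then-show workflow from mplleaflet's README (the coordinates here are made up):

import matplotlib.pyplot as plt
import mplleaflet

# Plot some lon/lat points with ordinary matplotlib calls...
lons = [4.89, 4.90, 4.91]
lats = [52.37, 52.38, 52.37]
plt.plot(lons, lats, 'b')     # a line
plt.plot(lons, lats, 'rs')    # red squares at the vertices

# ...then turn the current figure into an interactive Leaflet/OSM map
mplleaflet.show()             # writes an HTML file and opens it in the browser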
There are many libraries today that can do this for you - smopy, folium and tilemapbase are three examples from my recent use.
Each of these tools fetches map tiles from one of several servers that host OSM or other (Stamen, Carto, etc.) map tiles and then lets you display them and plot on them using matplotlib. Tilemapbase also caches the tiles locally so that they are not fetched again the next time.
But there does not seem to be a readily available tool yet, based on my recent experience, to use offline tilesets (such as a compressed .mbtiles file) as background for matplotlib plotting.
This link contains a survey of the above tools and more - https://github.com/ispmarin/maps
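To illustrate the workflow, here is a rough sketch with smopy (assuming the Map/show_mpl/to_pixels API from smopy's README; the bounding box and point are made-up examples):

import matplotlib.pyplot as plt
import smopy

# Fetch OSM tiles covering a bounding box given as (lat_min, lon_min, lat_max, lon_max)
m = smopy.Map((48.5, 2.0, 49.1, 2.7), z=10)

# Draw the stitched tiles as a matplotlib image and plot a point on top
ax = m.show_mpl(figsize=(8, 8))
x, y = m.to_pixels(48.86, 2.35)   # lat, lon -> pixel coordinates on the map image
ax.plot(x, y, 'or', ms=10)
plt.show()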
EDIT
I had mentioned in my previous answer that Tilemapbase did not work for some geographical locations in the world, and hence explicitly recommended not to use it. But it turns out I was wrong, and I apologize for that. It actually works great! The problem in my case was embarrassingly simple - I had reversed the order of lat and lon while fetching tiles, and hence it always fetched blank tiles for certain geographical locations, leading me to assume that it did not work for those locations.
I had raised the issue in github and it was immediately resolved by the developers. See it here - https://github.com/MatthewDaws/TileMapBase/issues/7
Note the responses:
Coordinates are to be provided in order (1) longitude, (2) latitude. If you copied them from Google Maps, they will be in lat/lon order and you have to flip them. So your map image is not empty, it's just a location in the ocean north of Norway.
And from the developer himself:
Yes, when I wrote the code, it seemed that there wasn't a universal standard for ordering. So I chose the one which is different to Google Maps. The method name from_lonlat should give a hint as to the correct ordering...
For those who are using Cartopy, this is relatively simple:
import matplotlib.pyplot as pl
import numpy as np
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
request = cimgt.OSM()
# Bounds: (lon_min, lon_max, lat_min, lat_max):
extent = [1, 13, 45, 53]
ax = pl.axes(projection=request.crs)
ax.set_extent(extent)
ax.add_image(request, 5) # 5 = zoom level
# Just some random points/lines:
pl.scatter(4.92, 51.97, transform=ccrs.PlateCarree())
pl.plot([4.92, 9], [51.97, 47], transform=ccrs.PlateCarree())
This produces a map of part of western Europe on an OSM tile background, with the example point and line drawn on top.
You can download the necessary tiles yourself from one of the tile servers. The OSM wiki explains the technical details behind slippy map tilenames and also includes examples for various programming and scripting languages.
Please also read about the tile usage policy and keep in mind that different tile serves may have different policies.
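To illustrate the naming scheme, here is the standard lat/lon-to-tile-number conversion from the OSM wiki in Python (the URL below uses the openstreetmap.org tile layout; substitute whichever server you are allowed to use under its policy):

import math

def deg2num(lat_deg, lon_deg, zoom):
    # Slippy-map tile numbers for a WGS84 coordinate at a given zoom level
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

x, y = deg2num(52.52, 13.40, 12)   # e.g. somewhere in Berlin at zoom 12
print(f"https://tile.openstreetmap.org/12/{x}/{y}.png")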
This is very easy with geopandas and contextily.
Have a look at https://geopandas.org/gallery/plotting_basemap_background.html.
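That page boils down to something like the following sketch (assuming the nybb sample dataset bundled with older geopandas releases; any GeoDataFrame works):

import geopandas as gpd
import contextily as ctx
import matplotlib.pyplot as plt

# Load an example GeoDataFrame and reproject it to Web Mercator,
# the CRS used by the tile providers
df = gpd.read_file(gpd.datasets.get_path("nybb")).to_crs(epsg=3857)

ax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor="k")
ctx.add_basemap(ax)   # fetch and draw the background tiles under the plot
plt.show()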