On my website I'm now converting uploaded images to WebP, because it is smaller than the other formats and users will load my pages faster (mobile users too). But it takes some time to convert a medium-sized image.
import io
import time

from PIL import Image as PilImage

img = PilImage.open('222.jpg')
originalThumbStr = io.BytesIO()  # binary in-memory buffer for the encoded WebP (works on Python 2 and 3)
now = time.time()
img.convert('RGBA').save(originalThumbStr, 'webp', quality=75)
print(time.time() - now)
It takes 2.8 seconds to convert the following image:
860 KB, 1920 x 1080
My machine has 8 GB of RAM and a 4-core Intel i5 processor, with no GPU.
I'm using Pillow==5.4.1.
Is there a faster way to convert an image to WebP? 2.8 s seems too long to wait.
If you want them done fast, use vips. So, taking your 1920x1080 image and using vips in the Terminal:
vips webpsave autumn.jpg autumn.webp --Q 70
That takes 0.3s on my MacBook Pro, i.e. it is 10x faster than the 3s your PIL implementation achieves.
If you want lots done really fast, use GNU Parallel and vips. So, I made 100 copies of your image and converted the whole lot to WEBP in parallel like this:
parallel vips webpsave {} {#}.webp --Q 70 ::: *jpg
That took 4.9s for 100 copies of your image, i.e. it is 50x faster than the 3s your PIL implementation achieves.
You could also use the pyvips binding - I am no expert on this but this works and takes 0.3s too:
#!/usr/bin/env python3
import pyvips
# VIPS
img = pyvips.Image.new_from_file("autumn.jpg", access='sequential')
img.write_to_file("autumn.webp")
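If you also want to control the quality from Python, pyvips passes save options as keyword arguments, so something like this should mirror the --Q 70 used on the command line (I haven't benchmarked it separately):

img = pyvips.Image.new_from_file("autumn.jpg", access='sequential')
img.write_to_file("autumn.webp", Q=70)   # Q maps to the same WebP quality factor as --Q on the CLI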
So, my best suggestion would be to take the 2 lines of code above and use a multiprocessing pool or multithreading approach to get a whole directory of images processed. That could look like this:
#!/usr/bin/env python3

import pyvips
from glob import glob
from pathlib import Path
from multiprocessing import Pool

def doOne(f):
    img = pyvips.Image.new_from_file(f, access='sequential')
    webpname = Path(f).stem + ".webp"
    img.write_to_file(webpname)

if __name__ == '__main__':
    files = glob("*.jpg")
    with Pool(12) as pool:
        pool.map(doOne, files)
That takes 3.3s to convert 100 copies of your image into WEBP equivalents on my 12-core MacBook Pro with NVME disk.
I need to extract the number of pages and their sizes in px/mm/cm/some-unit from PDF files using Python (sadly, 2.7, because it's a legacy project). The problem is that the files can be truly huge (hundreds of MiBs) because they'll contain large images.
I do not care about the content itself; I really just want a list of page sizes from the file, with as little RAM consumption as possible.
I found quite a few libraries that can do that (including, but not limited to, the ones in the answers here), but none provide any remarks on memory usage, and I suspect that most of them - if not all - read the whole file into memory before doing anything with it, which doesn't fit my purpose.
Are there any libraries that extract only structure and give me the data that I need without clogging my RAM?
pyvips can do this. It loads the file structure when you open the PDF and only renders each page when you ask for pixels.
For example:
#!/usr/bin/python

import sys
import pyvips

i = 0
while True:
    try:
        x = pyvips.Image.new_from_file(sys.argv[1], dpi=300, page=i)
        print("page =", i)
        print("width =", x.width)
        print("height =", x.height)
    except:
        break

    i += 1
libvips 8.7, due in another week or so, adds a new metadata item called n-pages you can use to get the length of the document. Until that is released though you need to just keep incrementing the page number until you get an error.
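Once 8.7 is out, reading that metadata item should look roughly like this (untested here, since it assumes the new n-pages property is exposed under that name):

x = pyvips.Image.new_from_file(sys.argv[1], dpi=300)
n_pages = x.get("n-pages")   # number of pages in the document (libvips >= 8.7)
print("pages =", n_pages)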
Using this PDF, when I run the program I see:
$ /usr/bin/time -f %M:%e ./sizes.py ~/pics/r8.pdf
page = 0
width = 2480
height = 2480
page = 1
width = 2480
height = 2480
page = 2
width = 4960
height = 4960
...
page = 49
width = 2480
height = 2480
55400:0.19
So it opened 50 pages in 0.2 s real time, with a total peak memory use of 55 MB. That's with Python 3, but it works fine with Python 2 as well. The dimensions are in pixels at 300 DPI.
If you set page to -1, it'll load all the pages in the document as a single very tall image. All the pages need to be the same size for this though, sadly.
Inspired by the other answer, I found that libvips, which is suggested there, uses poppler (it can fall back to some other library if it cannot find poppler).
So, instead of using the superpowerful pyvips, which seems great for multiple types of documents, I went with just poppler, which has multiple Python libraries. I picked pdflib and came up with this solution:
from sys import argv
from pdflib import Document

doc = Document(argv[1])
for num, page in enumerate(doc, start=1):
    print(num, tuple(2.54 * x / 72 for x in page.size))
The 2.54 * x / 72 part just converts from points (1/72 of an inch) to centimetres, nothing more.
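For example, an A4 page reported as 595 x 842 points comes out as 2.54 * 595 / 72 ≈ 21.0 cm by 2.54 * 842 / 72 ≈ 29.7 cm.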
Speed and memory test on a 264MiB file with one huge image per page:
$ /usr/bin/time -f %M\ %e python t2.py big.pdf
1 (27.99926666666667, 20.997333333333337)
2 (27.99926666666667, 20.997333333333337)
...
56 (27.99926666666667, 20.997333333333337)
21856 0.09
Just for reference, if anyone is looking for a pure-Python solution, I made a crude one which is available here. It is not thoroughly tested and much, much slower than this (some 30 s for the file above).
Background
I want to predict pathology images using Keras with Inception-ResNet-v2. I have already trained the model and got a .hdf5 file. Because a pathology image is very large (for example, 20,000 x 20,000 pixels), I have to scan the image to get small patches for prediction.
I want to speed up the prediction procedure using the multiprocessing library with Python 2.7. The main idea is to use different subprocesses to scan different lines and then send the patches to the model.
I saw somebody suggest importing keras and loading the model in the subprocesses. But I don't think that is suitable for my task. Loading the model with keras.models.load_model() takes about 47 s each time, which is very time-consuming. So I can't reload the model every time I start a new subprocess.
Question
My question is can I load the model in my main process and pass it as a parameter to subprocesses?
I have tried two methods, but neither of them worked.
Method 1. Using multiprocessing.Pool
The code is:
import keras
from keras.models import load_model
import multiprocessing

def predict(num, model):
    print dir(model)
    print num
    model.predict("image data, type:list")

if __name__ == '__main__':
    model = load_model("path of hdf5 file")
    list = [(1,model),(2,model),(3,model),(4,model),(5,model),(6,model)]
    pool = multiprocessing.Pool(4)
    pool.map(predict, list)
    pool.close()
    pool.join()
The output is
cPickle.PicklingError: Can't pickle <type 'module'>: attribute lookup __builtin__.module failed
I searched for the error and found that Pool can't map unpicklable parameters, so I tried Method 2.
Method 2. Using multiprocessing.Process
The code is
import keras
from keras.models import load_model
import multiprocessing

def predict(num, model):
    print num
    print dir(model)
    model.predict("image data, type:list")

if __name__ == '__main__':
    model = load_model("path of hdf5 file")
    list = [(1,model),(2,model),(3,model),(4,model),(5,model),(6,model)]
    proc = []
    for i in range(4):
        proc.append(multiprocessing.Process(target=predict, args=list[i]))
        proc[i].start()
    for i in range(4):
        proc[i].join()
In Method 2, I can print dir(model). I think that means the model is passed to the subprocesses successfully. But I got this error:
E tensorflow/stream_executor/cuda/cuda_driver.cc:1296] failed to enqueue async memcpy from host to device: CUDA_ERROR_NOT_INITIALIZED; GPU dst: 0x13350b2200; host src: 0x2049e2400; size: 4=0x4
The environment which I use:
Ubuntu 16.04, python 2.7
keras 2.0.8 (tensorflow backend)
one Titan X, Driver version 384.98, CUDA 8.0
Looking forward to your replies! Thanks!
Maybe you can use apply_async() instead of Pool.map(); you can find more details here:
Python multiprocessing pickling error
Multiprocessing works on the CPU, while model prediction happens on the GPU, of which there is only one. I can't see how multiprocessing can help you with the prediction itself.
Instead, I think you can use multiprocessing to scan different patches, which you seem to have already managed to achieve. Then stack these patches into a batch (or batches) and predict them together on the GPU.
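A rough sketch of that division of labour might look like this (everything here is illustrative: the patch-extraction helper, the .npy slide file, the row count and the 255.0 scaling are assumptions, not the asker's code; the key point is that the workers only do CPU work and the model is loaded after the pool has finished):

import numpy as np
from multiprocessing import Pool
from keras.models import load_model

PATCH = 299  # Inception-ResNet-v2 input size

def patches_for_row(args):
    # CPU-only work in a worker: slice one horizontal strip of the slide into patches
    slide_path, row = args
    slide = np.load(slide_path, mmap_mode='r')   # assumes the slide was pre-saved as a .npy array
    y = row * PATCH
    return [np.array(slide[y:y + PATCH, x:x + PATCH])
            for x in range(0, slide.shape[1] - PATCH + 1, PATCH)]

if __name__ == '__main__':
    rows = [("slide.npy", r) for r in range(10)]          # bounds checks omitted
    pool = Pool(4)                                        # fork workers before any GPU work starts
    rows_of_patches = pool.map(patches_for_row, rows)
    pool.close()
    pool.join()
    batch = np.stack([p for row in rows_of_patches for p in row]).astype('float32') / 255.0
    model = load_model("path of hdf5 file")               # loaded once, in the main process only
    predictions = model.predict(batch, batch_size=32)     # a single process drives the GPU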
As noted by Statham, multiprocessing requires all args to be compatible with pickle. This blog post describes how to save a Keras model as a pickle: http://zachmoshe.com/2017/04/03/pickling-keras-models.html
It may be a sufficient workaround to get your keras model passed as an arg to multiprocess, but I have not tested the idea myself.
I will also add that I had better luck running two Keras processes on a single GPU using Windows rather than Linux. On Linux I was getting out-of-memory errors on the second process, but the same memory allocation (45% of total GPU RAM for each) worked on Windows. In my case they were fits; for running predictions only, the memory requirements may be lower.
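From memory, the trick in that post is to teach the Keras Model class to pickle itself by round-tripping through an HDF5 temp file in __getstate__/__setstate__. A rough, untested sketch of that idea (my paraphrase, not the post's exact code) looks something like this; you would call make_keras_picklable() once before creating the pool:

import tempfile
import keras.models

def make_keras_picklable():
    # Illustrative: serialise the model to HDF5 bytes when pickling, rebuild it when unpickling
    def __getstate__(self):
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            keras.models.save_model(self, fd.name, overwrite=True)
            fd.seek(0)
            return {'model_bytes': fd.read()}

    def __setstate__(self, state):
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            fd.write(state['model_bytes'])
            fd.flush()
            restored = keras.models.load_model(fd.name)
        self.__dict__.update(restored.__dict__)

    keras.models.Model.__getstate__ = __getstate__
    keras.models.Model.__setstate__ = __setstate__

After calling it, model objects can be passed through Pool.map like any other picklable argument, though each worker still pays the deserialisation cost.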
I'm trying to suppress the output of aplay, but without success.
I know how to suppress print statements via stdout, but I haven't figured out how to achieve the same result with the pydub module.
For example, when I play a sound with this code:
from pydub import AudioSegment
from pydub.playback import play
next_kot = AudioSegment.from_ogg('/home/effe/Voz/Hello.ogg')
play(next_kot)
The output generated (in red!) is
avplay version 9.18-6:9.18-0ubuntu0.14.04.1, Copyright (c) 2003-2014 the Libav developers
  built on Mar 16 2015 13:19:10 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
Input #0, wav, from '/tmp/tmp5DUj0a.wav':
  Duration: 00:00:01.32, bitrate: 1411 kb/s
    Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
When you concatenate more audio, it is easy to lose key information among all this output.
Is there a way to cut off this kind of output?
Thanks.
I ran into this same issue and here is what I did. You can just create a new function named _play_with_ffplay_suppress and have the following code in it. The difference between the answer above and mine is that Jiaaro used
stdout=open(os.devnull, 'w')
stderr=os.stdout
and I used
stderr=devnull
stdout=devnull
after creating a variable with the same name. Very tiny difference, but I hope it solves the error that you mentioned in the comment.
Here is my code:
# rhp - additional imports added
import os
import subprocess
from tempfile import NamedTemporaryFile
from pydub.utils import get_player_name

PLAYER = get_player_name()

# rhp - custom function to suppress output while playing audio files
def _play_with_ffplay_suppress(seg):
    with NamedTemporaryFile("w+b", suffix=".wav") as f:
        seg.export(f.name, "wav")
        devnull = open(os.devnull, 'w')
        subprocess.call([PLAYER, "-nodisp", "-autoexit", "-hide_banner", f.name],
                        stdout=devnull, stderr=devnull)
For more information, you can read about the call function in the subprocess module here: https://docs.python.org/3/library/subprocess.html.
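As a small aside: on Python 3.3+ you don't even have to open os.devnull yourself, because subprocess ships a DEVNULL constant. A minor variation on the function above (the name _play_quietly is just illustrative):

import subprocess
from tempfile import NamedTemporaryFile
from pydub.utils import get_player_name

PLAYER = get_player_name()

def _play_quietly(seg):
    # Same idea as above, but lets subprocess handle the null device (Python 3.3+)
    with NamedTemporaryFile("w+b", suffix=".wav") as f:
        seg.export(f.name, "wav")
        subprocess.call([PLAYER, "-nodisp", "-autoexit", "-hide_banner", f.name],
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)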
the playback functions are really simple (and mostly included for easy use in the interactive python shell) - Your best bet is probably to make a copy of the playback code that is better suited to your needs:
if you're using ffplay this should work:
import os
import subprocess
from tempfile import NamedTemporaryFile

from pydub.utils import get_player_name

PLAYER = get_player_name()

def play_with_ffplay(seg):
    with NamedTemporaryFile("w+b", suffix=".wav") as f:
        seg.export(f.name, "wav")
        subprocess.call(
            [PLAYER, "-nodisp", "-autoexit", f.name],
            stdout=open(os.devnull, 'w'),
            stderr=os.stdout  # note: os has no stdout attribute; see the variant above that sends stderr to devnull
        )
note: ffmpeg is always going to open a new window for ffplay though - I'd recommend installing pyaudio and using that for playback instead
I am programming in Python 2.7 with the NLTK library for both text preprocessing and classification in sentiment analysis. I am using the NLTK wrapper of scikit-learn algorithms. The code below runs after preprocessing and separation into train and test sets.
import nltk
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC

training_set = nltk.classify.util.apply_features(extractFeatures, trainTweets)
testing_set = nltk.classify.util.apply_features(extractFeatures, testTweets)

# LinearSVC
LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
LinearSVCAccuracy = nltk.classify.accuracy(LinearSVC_classifier, testing_set)*100
print "LinearSVC accuracy percentage:" + str(LinearSVCAccuracy)
It works fine when the number of rows is around 4,000 tweets for training, but when it increases to, for example, 10,000 tweets, the process gets killed with the following error.
Memory cgroup out of memory: Kill process 24293 (python) score 848 or sacrifice child
Killed process 24293, UID 29091, (python) total-vm:14569168kB, anon-rss:14206656kB, file-rss:3412kB
Clocksource tsc unstable (delta = -17179861691 ns). Enable clocksource failover by adding clocksource_failover kernel parameter.
My PC has 8 GB of RAM, and I even tried a machine with 16 GB, but the problem persists. How can I classify this number of tweets without running out of memory?
Which OS are you running? Which Python distribution? Try installing Cython and/or using scikit-learn directly. Have a look at scikit-learn's optimization techniques.
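To illustrate the "use scikit-learn directly" part: a sparse vectorizer avoids building a dense feature dictionary per tweet, which is usually what eats the memory with apply_features. A sketch under the assumption that the tweets are (text, label) pairs (the toy data below just stands in for the real trainTweets/testTweets):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# toy stand-ins for the real (text, label) pairs
trainTweets = [("i love this phone", "pos"), ("i hate this phone", "neg")]
testTweets = [("love it", "pos")]

train_texts, train_labels = zip(*trainTweets)
test_texts, test_labels = zip(*testTweets)

vectorizer = TfidfVectorizer()                    # produces a sparse matrix, not one dict per tweet
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

classifier = LinearSVC()
classifier.fit(X_train, train_labels)
accuracy = accuracy_score(test_labels, classifier.predict(X_test)) * 100
print "LinearSVC accuracy percentage:" + str(accuracy)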
I am trying to use the textcleaner script to clean up real-life images that I am using for OCR. The issue I am having is that the images sent to me are sometimes rather large (3.5 MB - 5 MB, 12 MP pictures). The command I run with textcleaner ( textcleaner -g -e none -f <int # 10 - 100> -o 5 result1.jpg out1.jpg ) takes about 10 seconds at -f 10 and minutes or more at -f 100.
To get around this I tried using ImageMagick to compress the image so it was much smaller. Using convert -strip -interlace Plane -gaussian-blur 0.05 -quality 50% main.jpg result1.jpg I was able to take a 3.5 MB file and convert it almost losslessly to ~400 KB. However, when I run textcleaner on this new file it STILL acts like it's a 3.5 MB file (the times are almost exactly the same). I have tested these textcleaner settings against a file that was ~400 KB to begin with (not compressed), and it is almost instant, while -f 100 takes about 12 seconds.
I am about out of ideas. I would like to follow the example here, as I am in almost exactly the same situation. However, at the current speed of transformation an entire OCR process could take over 10 minutes, when I need it to be around 30 seconds.