decoder jpeg not available: Django with Apache2

When I try to run this simple piece of code:
import os
import time
from django.conf import settings
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

time_created = time.time()
tempPath = 'user_image/' + str(request.user.id) + '/' + str(time_created) + '/'
print tempPath
path = default_storage.save(tempPath + 'original.jpg', ContentFile(content_image.read()))
tmp_file = os.path.join(settings.MEDIA_ROOT, path)  # this line gives the error
image = open(tmp_file)
it gives me the error: decoder jpeg not available
This is what I did to try to resolve it:
http://www.answermysearches.com/fixing-pil-ioerror-decoder-jpeg-not-available/320/
I am using python2.7 and Imaging-1.1.7.
After following the above link, when I run python selftest.py in a terminal, I get the following output:
python selftest.py
--------------------------------------------------------------------
PIL 1.1.7 TEST SUMMARY
--------------------------------------------------------------------
Python modules loaded from ./PIL
Binary modules loaded from ./PIL
--------------------------------------------------------------------
--- PIL CORE support ok
*** TKINTER support not installed
--- JPEG support ok
--- ZLIB (PNG/ZIP) support ok
--- FREETYPE2 support ok
*** LITTLECMS support not installed
--------------------------------------------------------------------
Running selftest:
--- 57 tests passed.
But when I access my application from the browser, I still get decoder jpeg not available.
Note: I restarted the Apache server (not sure if that is required).
Do I need to make some config changes in Apache?
I searched on Stack Overflow and found similar questions, but none of them dealt with Apache.

I have had the same error. The reason was another site running on the same server: it had an old PIL on the Python path. So cleaning up the Python path, or reinstalling that old PIL, may help you.
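To confirm which PIL build the Apache process is actually importing, a minimal check is to print the module path from inside the app (e.g. from a temporary view, assuming a typical mod_wsgi setup; this works under PIL 1.1.7 on Python 2):

# Show where PIL was loaded from and its version; a path that points into
# another site's stale install would explain the missing JPEG decoder.
import PIL.Image
print(PIL.Image.__file__)   # file the Apache worker actually imported
print(PIL.Image.VERSION)    # should be '1.1.7' for the rebuilt PIL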

Related

Error while knitting Rmarkdown with reticulate: OSError: libomp.dylib not found

I'm trying to knit an R Markdown document that uses reticulate for a chunk of Python code.
Knitting throws an error saying libomp.dylib is not found, but the code runs fine as a chunk (not knitted).
Machine: Macbook M1 Max
OS: Ventura 13.0.1
I set up Python with reticulate as follows:
library(reticulate)
use_python("/Users/ramanujam/opt/anaconda3/bin/python")
and, in the Python chunk:
from lazypredict.Supervised import LazyClassifier
from sklearn.model_selection import train_test_split
I'm getting the error below; however, the code executes fine as a chunk.
Error in py_call_impl(callable, dots$args, dots$keywords) :
OSError: dlopen(/Users/xxx/opt/anaconda3/lib/python3.9/site-packages/lightgbm/lib_lightgbm.so, 0x0006): Library not loaded: /usr/local/opt/libomp/lib/libomp.dylib
Referenced from: <8DF2AF67-B85F-3F67-B687-E50A514307EC> /Users/xxx/opt/anaconda3/lib/python3.9/site-packages/lightgbm/lib_lightgbm.so
Reason: tried: '/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/Library/Frameworks/R.framework/Resources/lib/libomp.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '/Library/Java/JavaVirtualMachines/jdk1.8.0_241.jdk/Contents/Home/jre/lib/server/libomp.dylib' (no such file)
I understand this is a dependency on libomp.
I already tried installing libomp with
brew install libomp
I tried creating a symlink from the brew install location to the R.framework path, but there is an architecture conflict there.
Since the code executes in a chunk but fails during knitting, I'm guessing the libomp source used is different there. But I'm not clear on how or why RStudio would use a different source when running chunks vs. knitting, as I have specified the Python path to be used.
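One way to test that guess is to run a small diagnostic in the Python chunk both interactively and while knitting, then compare the output (a minimal sketch, nothing project-specific assumed):

# Print which interpreter and architecture this chunk runs under; if the
# knitted output differs from the interactive one, reticulate resolved a
# different Python (e.g. an x86_64 build under Rosetta) at knit time.
import sys, platform
print(sys.executable)      # path of the interpreter actually running
print(platform.machine())  # 'arm64' vs 'x86_64'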
Any help would be appreciated. Thanks in advance!

New Errors with Rmarkdown and pandoc-2.5 & 2.6 (cannot decode byte \xa9)

I recently uploaded a new version of an R package whose R Markdown vignettes work well enough on my Ubuntu system with pandoc 2.2.
Today I was notified by the CRAN checks of the following:
This version fails on both Fedora Linux and macOS with pandoc 2.5:
--- re-building ‘Rmarkdown.Rmd’ using rmarkdown
pandoc: Cannot decode byte '\xa9':
Data.Text.Internal.Encoding.streamDecodeUtf8With: Invalid UTF-8 stream
Error: processing vignette 'Rmarkdown.Rmd' failed with diagnostics:
pandoc document conversion failed with error 1
--- failed re-building ‘Rmarkdown.Rmd’
--- re-building ‘code_chunks.Rmd’ using rmarkdown
convert: profile 'icc': 'RGB ': RGB color space not permitted on
grayscale PNG `tmpout/p-chunk65-1.png' #
warning/png.c/MagickPNGWarningHandler/1672.
pandoc: Cannot decode byte '\xa9':
Data.Text.Internal.Encoding.streamDecodeUtf8With: Invalid UTF-8 stream
Error: processing vignette 'code_chunks.Rmd' failed with diagnostics:
pandoc document conversion failed with error 1
--- failed re-building ‘code_chunks.Rmd’
\xa9 is a Latin-1 copyright sign. The PNG error is seen only on macOS.
Unfortunately knitr/pandoc produce no debugging information, so this is all I know.
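To locate the offending byte yourself, a short sketch (the vignette path is just an example):

# Scan a vignette for the 0xa9 byte (the Latin-1 copyright sign) that
# pandoc cannot decode as UTF-8, printing context around each hit.
with open("vignettes/Rmarkdown.Rmd", "rb") as f:
    data = f.read()
for i, byte in enumerate(data):
    if byte == 0xa9:  # iterating bytes yields ints in Python 3
        print(i, data[max(0, i - 20):i + 20])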
It appears to me that the error about \xa9 is a wild goose chase.
The Pandoc instructions have changed. Replacing this old stanza
header-includes:
- \usepackage{xcolor}
- \usepackage{fancybox}
- \usepackage{calc}
- \usepackage{subfig}
with this new one solved the problem:
header-includes:
- |
```{=latex}
\usepackage{xcolor}
\usepackage{fancybox}
\usepackage{calc}
\usepackage{subfig}
```
After that, I do get a successful build with Pandoc 2.6.
At first I thought I understood the problem, but then it happened again; after I re-typed the new stanza entirely, Pandoc stopped giving the error. So I am nonplussed.
I have not yet found an answer for the PNG problem on Macintosh.

read() after VideoCapture in OpenCV always returns False

I can't figure out what the problem is.
I am using
Ubuntu 17.04
Python 2.7.13
OpenCV Version: 3.3.0
I have gone through all the related problems on the internet but have not found a solution yet.
The file 'v.mp4' is in the same directory as my Python file.
CODE
import cv2

vidcap = cv2.VideoCapture('v.mp4')
success, image = vidcap.read()
count = 0
print success
while success:
    success, image = vidcap.read()
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    if cv2.waitKey(10) == 27:  # exit if Escape is hit
        break
    count += 1
The opencv-python package does not have VideoCapture() support outside of Windows. See my answer here or the PyPI opencv-python documentation, which states:
IMPORTANT NOTE
MacOS and Linux packages do not support video related functionality (not compiled with FFmpeg).
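As a quick sanity check, you can fail fast when the capture never opens (a minimal sketch using the same file name as the question):

# On a build without video support, VideoCapture silently fails to open
# and every read() returns (False, None); test isOpened() explicitly.
import cv2

vidcap = cv2.VideoCapture('v.mp4')
if not vidcap.isOpened():
    raise IOError("could not open v.mp4 - the OpenCV build may lack FFmpeg support")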
In my case, I was using PyCharm as my IDE. Every isOpened() and read() call returned False. All I had to do was change the Python version that PyCharm used to Python 2.

How to start the Udacity machine learning course in an Anaconda Jupyter notebook with Python 2.7?

I want to start the Udacity machine learning course, so I downloaded the ud120-projects-master.zip file and extracted it into my Downloads folder. I have installed the Anaconda Jupyter notebook (Python 2.7).
The first mini-project is Naïve Bayes, so I opened the Jupyter notebook and used %load nb_author_id.py to convert it into a .ipynb file.
But I think I first have to run startup.py in the tools folder to extract the data, so I ran startup.ipynb.
# %load startup.py
print
print "checking for nltk"
try:
    import nltk
except ImportError:
    print "you should install nltk before continuing"

print "checking for numpy"
try:
    import numpy
except ImportError:
    print "you should install numpy before continuing"

print "checking for scipy"
try:
    import scipy
except:
    print "you should install scipy before continuing"

print "checking for sklearn"
try:
    import sklearn
except:
    print "you should install sklearn before continuing"

print
print "downloading the Enron dataset (this may take a while)"
print "to check on progress, you can cd up one level, then execute <ls -lthr>"
print "Enron dataset should be last item on the list, along with its current size"
print "download will complete at about 423 MB"
import urllib
url = "https://www.cs.cmu.edu/~./enron/enron_mail_20150507.tgz"
urllib.urlretrieve(url, filename="../enron_mail_20150507.tgz")
print "download complete!"

print
print "unzipping Enron dataset (this may take a while)"
import tarfile
import os
os.chdir("..")
tfile = tarfile.open("enron_mail_20150507.tgz", "r:gz")
tfile.extractall(".")
print "you're ready to go!"
But I am getting an error:
checking for nltk
checking for numpy
checking for scipy
checking for sklearn
downloading the Enron dataset (this may take a while)
to check on progress, you can cd up one level, then execute <ls -lthr>
Enron dataset should be last item on the list, along with its current size
download will complete at about 423 MB
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-1-c30fe1ced56a> in <module>()
32 import urllib
33 url = "https://www.cs.cmu.edu/~./enron/enron_mail_20150507.tgz"
---> 34 urllib.urlretrieve(url, filename="../enron_mail_20150507.tgz")
35 print "download complete!"
36
This is for nb_author_id.py:
# %load nb_author_id.py
#!/usr/bin/python
"""
This is the code to accompany the Lesson 1 (Naive Bayes) mini-project.
Use a Naive Bayes Classifier to identify emails by their authors
authors and labels:
Sara has label 0
Chris has label 1
"""
import sys
from time import time
sys.path.append("../tools/")
from email_preprocess import preprocess
### features_train and features_test are the features for the training
### and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, features_test, labels_train, labels_test = preprocess()
#########################################################
### your code goes here ###
#########################################################
error/warning
C:\Users\jr31964\AppData\Local\Continuum\Anaconda2\lib\site-packages\sklearn\cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
no. of Chris training emails: 7936
no. of Sara training emails: 7884
How do I start with the Naïve Bayes mini-project, and what prerequisite actions are needed?
Since the course is, I presume, in Python 3, I would suggest making a conda environment with Python 3. You can do this even though your base Python installation is Python 2. This should save you from converting all the course code from Python 3 to your Python 2.
conda create --name UdacityCourseEnvironment python=3.6
# to get into your new environment (mac/linux)
source activate UdacityCourseEnvironment
# to get into your new environment (windows)
activate UdacityCourseEnvironment
# When you need new packages inside your new environment
conda install nameOfPackage
Source: Switching between python 2 and 3 with Conda
You made the right decision to go with Anaconda - this solves a bunch of incompatibility issues between Python 2 and Python 3 and the various package dependencies. I did it the hard way and am converting the code to Python 3 (and its dependencies) as I go along, because I want an up-to-date environment and programming skills when I finish; but that's just me.
Obviously, you can ignore that deprecation warning: sklearn 0.19.0 still works. Anyone who tries to run this after 0.20.0 will have an issue. But if you find it annoying (like me), you can edit the file tools/email_preprocess.py and change the following lines (originals in comments):
# from sklearn import cross_validation
from sklearn.model_selection import train_test_split
and
#features_train, features_test, labels_train, labels_test = cross_validation.train_test_split(word_data, authors, test_size=0.1, random_state=42)
features_train, features_test, labels_train, labels_test = train_test_split(word_data, authors, test_size=0.1, random_state=42)
Also, some installs depend on others: an earlier successful install (e.g. numpy) can cause the install of other packages (e.g. scipy) to fail, because the prerequisite there is numpy+mkl. If you installed plain numpy, it needs to be uninstalled and replaced. See more on that at https://github.com/scipy/scipy/issues/7221
The next problem I hit was that, on my machine, the volume of the email files in enron_mail_20150507.tgz was so large that it ran for several hours without reaching the completion message:
print "you're ready to go!"
It turns out that my IDE (PyCharm) was indexing the files as they were being unpacked, and this was thrashing the disk. As indexing the text files is unnecessary, I turned it off for the 'maildir' directory. That allowed startup.py to finish.
The error you are encountering with urllib is due to the package changing in Python 3: you need to change the import statement to:
import urllib.request
...and then your line 34 (error message above) to:
urllib.request.urlretrieve(url, filename="../enron_mail_20150507.tar.gz")
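Putting those two changes together, the download step of startup.py becomes:

# Python 3 replacement for the Python 2 download step; urllib.urlretrieve
# moved to urllib.request.urlretrieve in Python 3.
import urllib.request

url = "https://www.cs.cmu.edu/~./enron/enron_mail_20150507.tgz"
urllib.request.urlretrieve(url, filename="../enron_mail_20150507.tar.gz")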
Note also this link on github is very helpful: https://github.com/MLTO/general/wiki/Python-Setup-for-Udacity-ud120-course
The rest of this answer relates to Windows 10, so Linux users can skip this.
The next problem I encountered was that some of the package imports were failing, due to the installs not being correctly optimized for W10. An invaluable resource to resolve this is a set of Windows optimized .whl (wheel) files that can be found at http://www.lfd.uci.edu/~gohlke/pythonlibs/
The next problem was that unpacking the .tgz file introduced the probably familiar LF/CRLF line-ending issues between Linux and Windows files. There is a fix for this from #monkshow92 on GitHub: https://github.com/udacity/ud120-projects/issues/46
Apart from that, it was a breeze....

Jpegs in Django-wiki

I'm trying to get django-wiki running.
It works well so far, except I can't display .jpeg images.
At first I had trouble just importing JPEG files into the web app.
I fixed this by modifying PIL's setup.py as follows:
JPEG_ROOT = libinclude("/usr/lib")
# Line 214
add_directory(library_dirs, "/usr/lib")
add_directory(library_dirs, "/usr/lib/x86_64-linux-gnu")
Jpeg libs I have currently installed:
libjpeg-progs
libjpeg62:amd64
libjpeg62-dev:amd64
libjpeg8:amd64
libopenjpeg2:amd64
After installing PIL with pip install PIL, I get this output, which doesn't look that bad, or at least I thought so:
*** TKINTER support not available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
*** FREETYPE2 support not available
*** LITTLECMS support not available
There are no error messages (and no "decoder not available"), and I can view the images properly on my server, which means the upload works great. But in the wiki only the file names are shown, and when I click on one I get:
"This image failed to load."
Could someone please help me? I can't find any error output (debug mode is activated).
Thanks in advance
You are compiling software! You need to install the development libraries for these things to compile, e.g. apt-get install libjpeg-dev.
Also, install Pillow instead; it has less chance of failing to compile: pip install pillow.
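Once the development headers are in place and Pillow is rebuilt, a quick way to verify JPEG support in the same environment the app runs in (a sketch assuming a writable /tmp):

# Round-trip a tiny JPEG; this raises IOError if Pillow/PIL was built
# without libjpeg, which is exactly the "decoder jpeg not available" case.
from PIL import Image

Image.new("RGB", (8, 8)).save("/tmp/jpeg_probe.jpg")
print(Image.open("/tmp/jpeg_probe.jpg").format)  # prints 'JPEG' on success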