I have a PySpark dataframe df containing paths to text files. I want to create a new column with the content of the text files.
import pyspark.sql.functions as F
from pyspark.sql.types import *
def read_file(filepath):
    import s3fs
    s3 = s3fs.S3FileSystem()
    with s3.open(filepath) as f:
        return f.read()
read_file_udf = F.udf(read_file, StringType())
df.withColumn('raw_text', read_file_udf('file')).show()
+---------------------+-----------+
| file | raw_text|
+---------------------+-----------+
|s3://bucket/file1.txt| [B#aa2a4f3|
|s3://bucket/file2.txt|[B#138664c5|
|s3://bucket/file3.txt| [B#3bcc67e|
|s3://bucket/file4.txt|[B#70b735c4|
|s3://bucket/file5.txt|[B#6fad821d|
+---------------------+-----------+
Instead of getting the actual file content, I am getting these strange [B# codes. What are they, why am I getting them and how do I fix this?
To answer my own question... I was getting [B# because the read_file() function was returning the file content as bytes rather than a decoded string. Defining:
def read_file(filepath):
    import s3fs
    s3 = s3fs.S3FileSystem()
    with s3.open(filepath) as f:
        return f.read().decode("utf-8")
will fix the issue.
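As an illustration of what was going on (a minimal sketch, assuming an active SparkSession named spark; not part of the original question): a StringType UDF that returns bytes ends up displayed as an opaque JVM byte-array reference, while returning a decoded str shows the actual text.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
demo = spark.createDataFrame([("hello",)], ["value"])

# Returning bytes from a StringType UDF: depending on the Spark version,
# this shows up as the opaque byte-array reference seen above ("[B#...").
as_bytes = F.udf(lambda s: s.encode("utf-8"), StringType())

# Returning a decoded str: displays the actual text.
as_text = F.udf(lambda s: s.encode("utf-8").decode("utf-8"), StringType())

demo.withColumn("raw", as_bytes("value")).withColumn("decoded", as_text("value")).show()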
Following Pickle figures from matplotlib, I am trying to load a figure from a pickle. I am using the same code with the modifications that are suggested in the responses.
Saving script:
import numpy as np
import matplotlib.pyplot as plt
import pickle as pl
# Plot simple sinus function
fig_handle = plt.figure()
x = np.linspace(0,2*np.pi)
y = np.sin(x)
plt.plot(x,y)
# plt.show()
# Save figure handle to disk
pl.dump(fig_handle,file('sinus.pickle','wb'))
Loading script:
import matplotlib.pyplot as plt
import pickle as pl
import numpy as np
# Load figure from disk and display
fig_handle = pl.load(open('sinus.pickle', 'rb'))
fig_handle.show()
The saving script produces a file named "sinus.pickle" but the loading file does not show the anticipated figure. Any suggestions?
Python 2.7.13
matplotlib 2.0.0
numpy 1.12.1
P.S. Following a suggestion, I replaced fig_handle.show() with plt.show(), which produced an error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 109, in _set_device_scale
    self.figure.dpi = self.figure.dpi / self._device_scale * value
  File "/usr/local/lib/python2.7/site-packages/matplotlib/figure.py", line 416, in _set_dpi
    self.callbacks.process('dpi_changed', self)
  File "/usr/local/lib/python2.7/site-packages/matplotlib/cbook.py", line 546, in process
    if s in self.callbacks:
AttributeError: 'CallbackRegistry' object has no attribute 'callbacks'
What you call your "loading script" doesn't make any sense.
From the very link that you provided in your question, loading the pickled figure is as simple as:
# Load figure from disk and display
fig_handle2 = pl.load(open('sinus.pickle','rb'))
fig_handle2.show()
The final solution involved changing fig_handle.show() to plt.show() and switching the backend to "TkAgg", based on advice given by ImportanceOfBeingErnest.
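Put together, a minimal sketch of the adjusted loading script (the TkAgg backend switch and plt.show() come from the advice above; note the matplotlib.use call has to happen before pyplot is imported):
import pickle as pl

import matplotlib
matplotlib.use("TkAgg")  # switch the backend before importing pyplot
import matplotlib.pyplot as plt

# Load the pickled figure from disk; since it was created via plt.figure(),
# unpickling re-registers it with pyplot, so plt.show() can display it.
with open('sinus.pickle', 'rb') as f:
    fig_handle = pl.load(f)

plt.show()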
I am pulling PNG images from Jupyter Notebooks and manage to display them with IPython.display.Image, but not with matplotlib.pyplot. What am I missing? I use Python 2.7.
I am using the following algorithm:
To open the notebook JSON content I do:
import nbformat
notebook_ = nbformat.read(file_notebook, 4)
After retrieving the relevant cell information I pull the png information from it using:
def cell_to_image(cell, out_value_item_number=1):
    if "execution_count" in cell.keys():  # i.e. version >= 4
        return cell["outputs"][out_value_item_number]['data']['image/png']
    elif "prompt_number" in cell.keys():  # i.e. version < 4
        return cell["outputs"][out_value_item_number]['png']
    return None
cell_image = cell_to_image(cell)
The first few characters of cell_image (which is unicode) look like:
iVBORw0KGgoAAAANSUhEUgAAA64AAAFMCAYAAADLFeHSAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\n
AAALEgAACxIB0t1+/AAAIABJREFUeJzs3Xd8jef/x/HXyTjZiYQkCGrU3ruR0tr9oq2qGtGo0dbe
\nm5pVlJpFUSMoVb6UoEZ/lCpatWuPUiNEEiMDmef3R75OexonJKUO3s/HI4/mXPd1X/d1f+LRR965
\n7/u6DSaTyYSIiIiIiIiIjbJ70hMQERERERERyYiCq4iIiIiIiNg0BVcRERERERGxaQquIiIiIiIi
\nYtMUXEVERERERMSmKbiKiIiIiIiITVNwFRGRxyIkJIRixYqxfv36+24/e/YsxYoVo3jx4v/yzGxb
\naGgoderUIS4uDoBdu3bRsmVLKlasyCuvvMKgQYOIjo622CcsLIyGDRtSunRp6tSpw8KFC62OW7p0
\naRo2bJju53Lnzh1GjRrFyy+/TNmyZWnRogW//fbbQ835q6++olGjRpQvX5769eszc+ZMkpOTzdtT
\nU1OZNGkSNWrUoHTp0jRp0oTdu3enGyc2NpZOn
I can easily plot in my Jupyter notebook using
from IPython.display import Image
Image(cell_image)
And now to my question:
How can I manipulate cell_image to be plt.subplot friendly?
(Assuming import matplotlib.pyplot as plt).
I realise that plt.imshow wouldn't work directly because it requires an array, whereas what I have is a string (as far as I understand).
If you have your image string representation in a variable string_rep, the following code should work.
from io import BytesIO
import matplotlib.image as mpimage
import matplotlib.pyplot as plt
with BytesIO(string_rep.decode('base64')) as byte_rep:
    image = mpimage.imread(byte_rep)
plt.imshow(image)
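Note that str.decode('base64') only exists on Python 2. On Python 3, a roughly equivalent sketch uses the base64 module (string_rep is assumed to hold the base64 text pulled from the notebook JSON, as above):
import base64
from io import BytesIO

import matplotlib.image as mpimage
import matplotlib.pyplot as plt

# Decode the base64 text to raw PNG bytes and read it as an image array.
with BytesIO(base64.b64decode(string_rep)) as byte_rep:
    image = mpimage.imread(byte_rep, format='png')
plt.imshow(image)
plt.show()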
I'm trying to extract features from a text document. Here is my code:
import sklearn
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
files = sklearn.datasets.load_files('/home/niyas/Documents/project/container', shuffle = False)
vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(files.data[1])
Y=vectorizer.get_feature_names()
I'm getting an error: "ValueError: empty vocabulary; perhaps the documents only contain stop words". The code works fine when I pass a string with the exact same content as the text doc.
Help me. Thanks in advance.
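There is no answer recorded here, but a likely explanation (an assumption, not from the original thread): files.data is a list of documents, so files.data[1] is a single document; fit_transform then treats that one string as the corpus and iterates over it character by character, and no single character survives the default token pattern, hence the empty vocabulary. Passing a list of documents should work, for example:
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer

# Same corpus location as in the question.
files = load_files('/home/niyas/Documents/project/container', shuffle=False)

vectorizer = CountVectorizer(min_df=1)

# Fit on the whole corpus (a list of documents)...
X = vectorizer.fit_transform(files.data)
# ...or wrap a single document in a list instead of passing the raw string.
X_single = CountVectorizer(min_df=1).fit_transform([files.data[1]])

Y = vectorizer.get_feature_names()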
I'm trying to get data from a zipped CSV file. Is there a way to do this without unzipping the whole file? If not, how can I unzip the files and read them efficiently?
I used the zipfile module to import the ZIP directly into a pandas dataframe.
Let's say the file name is "intfile" and it's in a .zip named "THEZIPFILE":
import pandas as pd
import zipfile
zf = zipfile.ZipFile('C:/Users/Desktop/THEZIPFILE.zip')
df = pd.read_csv(zf.open('intfile.csv'))
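If you don't know the exact name of the CSV inside the archive, the same zf object can list its members first (a small convenience sketch, not part of the original answer):
# Inspect the member names, then read whichever CSV you need.
print(zf.namelist())            # e.g. ['intfile.csv', ...]
df = pd.read_csv(zf.open(zf.namelist()[0]))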
If you aren't using Pandas it can be done entirely with the standard lib. Here is Python 3.7 code:
import csv
from io import TextIOWrapper
from zipfile import ZipFile
with ZipFile('yourfile.zip') as zf:
    with zf.open('your_csv_inside_zip.csv', 'r') as infile:
        reader = csv.reader(TextIOWrapper(infile, 'utf-8'))
        for row in reader:
            # process the CSV here
            print(row)
A quick solution can be to use the code below:
import pandas as pd
# pandas supports zip file reads
df = pd.read_csv("/path/to/file.csv.zip")
zipfile also supports the with statement.
So adding onto yaron's answer of using pandas:
with zipfile.ZipFile('file.zip') as zip:
    with zip.open('file.csv') as myZip:
        df = pd.read_csv(myZip)
I thought Yaron had the best answer, but I'd add code that iterates through multiple files inside a zip and appends the results:
import os
import pandas as pd
import zipfile
curDir = os.getcwd()
zf = zipfile.ZipFile(curDir + '/targetfolder.zip')
text_files = zf.infolist()
list_ = []
print ("Uncompressing and reading data... ")
for text_file in text_files:
print(text_file.filename)
df = pd.read_csv(zf.open(text_file.filename)
# do df manipulations
list_.append(df)
df = pd.concat(list_)
Yes. You want the module 'zipfile'
You open the zip file itself with zipfile.ZipFile(filename[, mode])
You can then use ZipFile.infolist() to enumerate each file within the zip, and extract it with ZipFile.open(name[, mode[, pwd]])
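A short sketch of that workflow, assuming an archive called 'archive.zip':
import zipfile

# Open the archive itself.
with zipfile.ZipFile('archive.zip') as zf:
    # Enumerate every member inside the zip.
    for info in zf.infolist():
        print(info.filename, info.file_size)
        # Open an individual member as a file-like object.
        with zf.open(info.filename) as member:
            data = member.read()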
This is the simplest approach, and the one I always use:
import pandas as pd
df = pd.read_csv("Train.zip",compression='zip')
Supposing you are downloading a zip file that contains a CSV and you don't want to use temporary storage. Here is what a sample implementation looks like:
#!/usr/bin/env python3
from csv import DictReader
from io import TextIOWrapper, BytesIO
from zipfile import ZipFile
import requests
def all_tickers():
    url = "https://simfin.com/api/bulk/bulk.php?dataset=industries&variant=null"
    r = requests.get(url)
    zip_ref = ZipFile(BytesIO(r.content))
    for name in zip_ref.namelist():
        print(name)
        with zip_ref.open(name) as file_contents:
            reader = DictReader(TextIOWrapper(file_contents, 'utf-8'), delimiter=';')
            for item in reader:
                print(item)
This takes care of all the Python 3 bytes/str issues.
Modern pandas (since version 0.18.1) natively supports compressed CSV files: its read_csv method has a compression parameter: {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
If you have a file named my_big_file.csv and you zip it with the same name as my_big_file.zip,
you may simply do this:
df = pd.read_csv("my_big_file.zip")
Note: check your pandas version first (this approach is not available in older versions).
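For reference, checking the installed version is a one-liner (pandas needs to be at least 0.18.1 for the compression handling described above):
import pandas as pd
print(pd.__version__)  # needs >= 0.18.1 for native compressed-CSV support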