Freeze header in pandas dataframe - python-2.7

Is there a way I can freeze a Pandas data frame header (as we do in Excel)? So if it's a long dataframe with many rows, we can still see the headers as we scroll down. I am assuming an IPython notebook.

This function may do the trick:
from ipywidgets import interact, IntSlider
from IPython.display import display
def freeze_header(df, num_rows=30, num_columns=10, step_rows=1,
                  step_columns=1):
    """
    Freeze the headers (column and index names) of a Pandas DataFrame. A widget
    enables sliding through the rows and columns.

    Parameters
    ----------
    df : Pandas DataFrame
        DataFrame to display
    num_rows : int, optional
        Number of rows to display
    num_columns : int, optional
        Number of columns to display
    step_rows : int, optional
        Step in the rows
    step_columns : int, optional
        Step in the columns

    Returns
    -------
    Displays the DataFrame with the widget
    """
    @interact(last_row=IntSlider(min=min(num_rows, df.shape[0]),
                                 max=df.shape[0],
                                 step=step_rows,
                                 description='rows',
                                 readout=False,
                                 disabled=False,
                                 continuous_update=True,
                                 orientation='horizontal',
                                 slider_color='purple'),
              last_column=IntSlider(min=min(num_columns, df.shape[1]),
                                    max=df.shape[1],
                                    step=step_columns,
                                    description='columns',
                                    readout=False,
                                    disabled=False,
                                    continuous_update=True,
                                    orientation='horizontal',
                                    slider_color='purple'))
    def _freeze_header(last_row, last_column):
        display(df.iloc[max(0, last_row-num_rows):last_row,
                        max(0, last_column-num_columns):last_column])
Test it with:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.RandomState(seed=0).randint(low=0,
                                                        high=100,
                                                        size=[200, 50]))
freeze_header(df=df, num_rows=10)
It results in a widget-controlled view of the DataFrame with the headers kept visible (the colors were customized in the ~/.jupyter/custom/custom.css file).

Old question, but I wanted to revisit it because I recently found a solution: use the qgrid module: https://github.com/quantopian/qgrid
This will not only allow you to scroll with the headers frozen but also sort, filter, edit inline and some other stuff. Very helpful.
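For example, a minimal sketch (assuming qgrid is installed and you are running in a Jupyter notebook, with df as your DataFrame):
import qgrid
qgrid.show_grid(df, show_toolbar=True)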

Try pandas' sticky headers:
import pandas as pd
import numpy as np
bigdf = pd.DataFrame(np.random.randn(16, 100))
bigdf.style.set_sticky(axis="index")
(This feature was added recently; I found it working on pandas 1.3.1, but not on 1.2.4.)
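Note that axis="index" pins the index down the left-hand side when scrolling horizontally; to keep the column headers visible while scrolling down (which is what the question asks), you can pass axis="columns" instead, or chain both, e.g.:
bigdf.style.set_sticky(axis="index").set_sticky(axis="columns")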

A solution that works in any editor is to select which rows you want to look at (note that df.ix has since been removed in favour of df.iloc/df.loc):
df.iloc[100:110]  # shows the 101st to 110th rows, keeping the header on top

Related

Adding constant values at the beginning of a dataframe in pyspark

I am trying to read a CSV file from an HDFS location, and three columns (batchid, load timestamp and a delete indicator) need to be added at the beginning. I am using Spark 2.3.2 and Python 2.7.5. Sample values for the three columns to be added are given below.
batchid- YYYYMMdd (int)
Load timestamp - current timestamp (timestamp)
delete indicator - blank (string)
Your question is a little unclear, but you can do something along these lines. First, create your timestamp using plain Python:
import time
import datetime
timestamp = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
Then, assuming you use the DataFrame API, plug it into a column:
import pyspark.sql.functions as psf

df = (df
      .withColumn('time',
                  psf.unix_timestamp(
                      psf.lit(timestamp), 'yyyy-MM-dd HH:mm:ss'
                  ).cast("timestamp"))
      .withColumn('batchid', psf.date_format('time', 'yyyyMMdd').cast('int'))  # batchid as integer YYYYMMdd
      .withColumn('delete', psf.lit('')))
To reorder your columns:
cols = ["time", "batchid", "delete"]
df = df.select(*cols + [k for k in df.columns if k not in cols])
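If the DataFrame still has to be created from the CSV file on HDFS first, a minimal sketch (the path and the header/inferSchema options are assumptions, and spark is the usual SparkSession):
df = spark.read.csv('hdfs:///path/to/input.csv', header=True, inferSchema=True)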

How to generate unigram, bigram and trigram from a large csv file and count their frequencies using nltk or pure python

I used this code and it generates unigrams, bigrams and trigrams from the given text. But I want to extract unigrams, bigrams and trigrams from a specific column of a large CSV file. Kindly help me with how I should proceed.
Firstly, some fancy code to produce the DataFrame.
from io import StringIO
import pandas as pd

sio = StringIO("""I am just going to type up something because you inserted an image instead ctr+c and ctr+v the code to Stackoverflow.
Actually, it's unclear what you want to do with the ngram counts.
Perhaps, it might be better to use the `nltk.everygrams()` if you want a global count.
And if you're going to build some sort of ngram language model, then it might not be efficient to do it as you have done too.""")

with sio as fin:
    texts = [line for line in fin]

df = pd.DataFrame({'text': texts})
Then you can easily use DataFrame.apply to extract the ngrams, e.g.
from collections import Counter
from functools import partial
from nltk import ngrams, word_tokenize

for i in range(1, 4):
    _ngrams = partial(ngrams, n=i)
    df['{}-grams'.format(i)] = df['text'].apply(
        lambda x: Counter(_ngrams(word_tokenize(x))))
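For a genuinely large CSV, one option is to stream the relevant column in chunks and accumulate the counts globally. A sketch, where the file name 'large_file.csv' and the column name 'text' are placeholders for your own data:
from collections import Counter
from nltk import ngrams, word_tokenize
import pandas as pd

totals = {n: Counter() for n in range(1, 4)}  # one global Counter per n-gram order
for chunk in pd.read_csv('large_file.csv', usecols=['text'], chunksize=10000):
    for text in chunk['text'].dropna():
        tokens = word_tokenize(text)
        for n in range(1, 4):
            totals[n].update(ngrams(tokens, n))
print(totals[2].most_common(10))  # the ten most frequent bigrams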

How to create a bag of words from csv file in python?

I am new to Python. I have a CSV file of cleaned tweets, and I want to create a bag of words from them.
I have the following code, but it's not working correctly.
import pandas as pd
from sklearn import svm
from sklearn.feature_extraction.text import CountVectorizer
data = pd.read_csv(open("Twidb11.csv"), sep=' ')
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(data.Text)
count_vect.vocabulary_
Error:
.ParserError: Error tokenizing data. C error: Expected 19 fields in
line 5, saw 22
I think this is a duplicate; you can see the answer here (there are a lot of answers and comments).
So the solution can be:
data = pd.read_csv('Twidb11.csv', error_bad_lines=False)
Or:
df = pandas.read_csv(fileName, sep='delimiter', header=None)
"In the code above, sep defines your delimiter and header=None tells pandas that your source data has no row for headers / column titles. Thus saith the docs: "If file contains no header row, then you should explicitly pass header=None". In this instance, pandas automatically creates whole-number indeces for each field {0,1,2,...}."

How can I extract data from the first column of a data frame and insert it into the other columns?

I have trouble with a data frame. I have a CSV file with ten columns, but all the data is stored in the first column. How can I automatically extract the data from the first column and put it into the other columns? Could you help me, please?
This is my code:
import pandas as pd
import numpy as np
df = pd.read_csv('test_dataset.csv')
df.head(3)
one_column = df.iloc[:,0]
one_column.head(3)
You can use the parameter quoting=3 (no quoting) in read_csv:
df = pd.read_csv('test_dataset.csv', quoting=3)
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
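To see why the default quoting can collapse a whole row into a single field, here is a tiny made-up example (not the questioner's data):
import pandas as pd
from io import StringIO

raw = '"a,b,c"\n"1,2,3"\n'
print(pd.read_csv(StringIO(raw)).shape)             # (1, 1): each line parsed as one quoted field
print(pd.read_csv(StringIO(raw), quoting=3).shape)  # (1, 3): quotes treated as literal characters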

NLTK applied to dataframes, how to iterate through list

Apologies in advance as this is my first question. I am using nltk to tokenize a series of tweets from a csv that I have loaded into a df. The tokenization works fine and outputs something like this [[My, uncle, ...]] into a cell in a df. I want to then apply a POS tagger to the tokenized text for the whole column of the df. I use the code below to do it. The line I am having difficulty with is df['tagged'] = df['tokenized'].apply(lambda row: [nltk.pos_tag(row) for item in row]). I know that I am iterating on the wrong element (row versus item) but can't figure out the correct way to do it. The code is below:
import pandas as pd
import numpy as np
import nltk
from nltk.tokenize import word_tokenize,wordpunct_tokenize
from nltk.tag import pos_tag
read_test = pd.read_csv("simontwittertest.csv")
df = read_test
df['tokenized'] = df['content'].apply(lambda row: [nltk.wordpunct_tokenize(row) for item in row])
df['tagged'] = df['tokenized'].apply(lambda row: [nltk.pos_tag(row) for item in row])
print(df['tagged'])
Out of interest, I found a small bug with pos_tag: it only works with NLTK 3.1, not NLTK 3.2 (at least with Python 2.7).
Many thanks.
If you want to apply a lambda function row-wise, you need DataFrame.apply with axis=1 (Series.apply does not accept an axis argument), and the function should be applied once per row rather than once per item in the row:
df['tokenized'] = df.apply(lambda row: nltk.wordpunct_tokenize(row['content']), axis=1)
df['tagged'] = df.apply(lambda row: nltk.pos_tag(row['tokenized']), axis=1)
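As a side note (not part of the original answer), because each cell holds a single string or list, applying the functions directly to the Series is an equivalent, simpler alternative:
df['tokenized'] = df['content'].apply(nltk.wordpunct_tokenize)
df['tagged'] = df['tokenized'].apply(nltk.pos_tag)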