How do I unit test Python DataFrames?
I have functions whose input and output are DataFrames, and almost every function I have works this way. If I want to unit test these, what is the best method of doing it? It seems like a lot of effort to create a new DataFrame (with values populated) for every function.
Are there any materials you can refer me to? Should you write unit tests for these functions?
While Pandas' test functions are primarily used for internal testing, NumPy includes a very useful set of testing functions that are documented here: NumPy Test Support.
These functions compare NumPy arrays, but you can get the array that underlies a Pandas DataFrame using the values property. You can define a simple DataFrame and compare what your function returns to what you expect.
One technique you can use is to define one set of test data for a number of functions. That way, you can use Pytest Fixtures to define that DataFrame once, and use it in multiple tests.
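For example, here is a minimal sketch of that pattern (the function under test and the column values are made up for illustration):

import pandas as pd
import pytest
from pandas.testing import assert_frame_equal

@pytest.fixture
def input_df():
    # one shared input DataFrame reused across several tests
    return pd.DataFrame({'a': [1, 2, 3], 'b': [10.0, 20.0, 30.0]})

def add_total(df):
    # hypothetical function under test
    return df.assign(total=df['a'] + df['b'])

def test_add_total(input_df):
    expected = input_df.assign(total=[11.0, 22.0, 33.0])
    assert_frame_equal(add_total(input_df), expected)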
In terms of resources, I found this article on Testing with NumPy and Pandas to be very useful. I also did a short presentation about data analysis testing at PyCon Canada 2016: Automate Your Data Analysis Testing.
You can use the pandas testing functions:
They give you flexible ways to compare your result against the expected result.
For example:
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3, 4, 5]})
df2 = pd.DataFrame({'a': [6, 7, 8, 9, 10]})
expected_res = pd.Series([7, 9, 11, 13, 15])
pd.testing.assert_series_equal(df1['a'] + df2['a'], expected_res, check_names=False)
For more details, refer to this link.
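The same module also has a frame-level counterpart, pd.testing.assert_frame_equal; a minimal sketch:

import pandas as pd

left = pd.DataFrame({'a': [7, 9, 11, 13, 15]})
right = pd.DataFrame({'a': [7, 9, 11, 13, 15]})
pd.testing.assert_frame_equal(left, right)  # passes; raises AssertionError on any mismatch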
If you are using pytest, the PandasSnapshot class from snapshottest_ext will be useful.
# use with pytest
import pandas as pd
from snapshottest_ext.dataframe import PandasSnapshot
def test_format(snapshot):
    df = pd.DataFrame([['a', 'b'], ['c', 'd']],
                      columns=['col 1', 'col 2'])
    snapshot.assert_match(PandasSnapshot(df))
One big con is that the snapshot is no longer human-readable. (Storing the content as CSV is more readable, but it is problematic.)
PS: I am the author of the pytest snapshot extension.
I don't think it's hard to create small DataFrames for unit testing?
import pandas as pd
from nose.tools import assert_dict_equal
input_df = pd.DataFrame.from_dict({
    'field_1': [some, values],
    'field_2': [other, values]
})

expected = {
    'result': [...]
}

assert_dict_equal(expected, my_func(input_df).to_dict(), "oops, there's a bug...")
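To make that concrete, here is a runnable sketch of the same pattern with made-up values and a made-up function under test, using a plain assert instead of nose:

import pandas as pd

def double_field_1(df):
    # hypothetical function under test
    return pd.DataFrame({'result': df['field_1'] * 2})

input_df = pd.DataFrame.from_dict({
    'field_1': [1, 2],
    'field_2': [3, 4]
})
expected = {'result': {0: 2, 1: 4}}
assert expected == double_field_1(input_df).to_dict(), "oops, there's a bug..."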
You could use snapshottest and do something like this:
def test_something_works(snapshot):  # snapshot is a pytest fixture from snapshottest
    data_frame = calc_something_and_return_pandas_dataframe()
    snapshot.assert_match(data_frame.to_csv(index=False), 'some_module_level_unique_name_for_the_snapshot')
This will create a snapshots folder containing a file with the CSV output, which you can update with --snapshot-update when your code changes.
It works by comparing the data_frame variable to what is saved to disk.
Might be worth mentioning that your snapshots should be checked in to source control.
I would suggest writing the values as CSV in docstrings (or separate files if they're large) and parsing them using pd.read_csv(). You can parse the expected output from CSV too, and compare, or else use df.to_csv() to write a CSV out and diff it.
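For example, here is a minimal sketch of that approach (the CSV data and the function under test are made up):

from io import StringIO
import pandas as pd
from pandas.testing import assert_frame_equal

def test_double_prices():
    input_csv = "item,price\napple,1.0\npear,2.0\n"
    expected_csv = "item,price\napple,2.0\npear,4.0\n"
    df = pd.read_csv(StringIO(input_csv))
    expected = pd.read_csv(StringIO(expected_csv))
    # hypothetical function under test: doubles the price column
    result = df.assign(price=df['price'] * 2)
    assert_frame_equal(result, expected)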
Pandas has built-in testing functions, but I don't find their output easy to parse, so I created an open source project called beavis with functions that output error messages that are easier for humans to read.
Here's an example of one of the built-in testing methods:
df = pd.DataFrame({"col1": [1042, 2, 9, 6], "col2": [5, 2, 7, 6]})
pd.testing.assert_series_equal(df["col1"], df["col2"])
Here's the error message:
> ???
E AssertionError: Series are different
E
E Series values are different (50.0 %)
E [index]: [0, 1, 2, 3]
E [left]: [1042, 2, 9, 6]
E [right]: [5, 2, 7, 6]
Not very easy to see which rows are mismatched because the output isn't aligned.
Here's how you can write the same test with beavis.
import beavis
beavis.assert_pd_column_equality(df, "col1", "col2")
This'll give you a more readable error message.
The built-in assert_frame_equal doesn't give a readable error message either. Here's how you can compare DataFrame equality with beavis.
df1 = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df2 = pd.DataFrame({'col1': [5, 2], 'col2': [3, 4]})
beavis.assert_pd_equality(df1, df2)
The frame-fixtures Python package (of which I am an author) is designed to make it easy to "create a new dataframe (with values populated)" for unit or performance tests.
For example, if you want to test against a DataFrame of floats and strings with a numerical index, you can use a compact string declaration to generate a DataFrame.
>>> import frame_fixtures as ff
>>> ff.Fixture.to_frame('i(I,int)|v(float,str)|s(4,2)').to_pandas()
0 1
34715 1930.40 zaji
-3648 -1760.34 zJnC
91301 1857.34 zDdR
30205 1699.34 zuVU
>>> ff.Fixture.to_frame('i(I,int)|v(float,str)|s(8,3)').to_pandas()
0 1 2
34715 1930.40 zaji 694.30
-3648 -1760.34 zJnC -72.96
91301 1857.34 zDdR 1826.02
30205 1699.34 zuVU 604.10
54020 268.96 zKka 1080.40
129017 3511.58 zJXD 2580.34
35021 1175.36 zPAQ 700.42
166924 2925.68 zyps 3338.48
Related
I've been trying to get the nlargest rows per group by following the method from this question. The solution to that question is correct up to a point.
In this example, I group by column A and want to return the rows of C and D based on the top two values in B.
For some reason the index of grp_df is multilevel and includes both A and the original index of ddf.
I was hoping to simply reset_index() and drop the unwanted index and just keep A, but I get the following error:
ValueError: The columns in the computed data do not match the columns in the provided metadata
Here is a simple example reproducing the error:
import numpy as np
import dask.dataframe as dd
import pandas as pd
np.random.seed(42)
df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
ddf = dd.from_pandas(df, npartitions=3)
grp_df = ddf.groupby('A')[['B','C']].apply(lambda x: x.nlargest(2, columns=['B']),
                                           meta={"B": 'f8', "C": 'f8'})
# Print is successful and results are correct
print(grp_df.head())
grp_df = grp_df.reset_index()
# Print is unsuccessful and shows error below
print(grp_df.head())
I found an approach for a solution here.
The following code now allows reset_index() to work and gets rid of the original ddf index. I am still not sure why the original ddf index came through the groupby in the first place, though.
meta = pd.DataFrame(columns=['B', 'C'], dtype=int, index=pd.MultiIndex([[], []], [[], []], names=['A', None]))
grp_df = ddf.groupby('A')[['B','C']].apply(lambda x: x.nlargest(2, columns=['B']), meta=meta)
grp_df = grp_df.reset_index().drop('level_1', axis=1)
Decision trees support multi-label classification, right? My y labels are of the type [['brufen','amoxil'],['brufen'],['xanex']]. The y labels can be a list of lists of labels, as mentioned in the sklearn documentation, so why does it give me an "unknown label type" error?
The error is resolved if the lists are all the same length, but how else should I handle this problem apart from one-hot encoding it?
You need to convert the labels to label-indicator format first. Then you can use them with decision trees.
For converting, you can use MultiLabelBinarizer.
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
y_converted = mlb.fit_transform([['brufen','amoxil'], ['brufen'], ['xanex']])
# Output: array([[1, 1, 0],
# [0, 1, 0],
# [0, 0, 1]])
mlb.classes_
# Output: array(['amoxil', 'brufen', 'xanex'], dtype=object)
Now use this y_converted instead of the original y in the decision tree.
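For example, a minimal sketch that fits a tree directly on the converted labels, continuing from y_converted above (the feature rows X are made up for illustration):

from sklearn.tree import DecisionTreeClassifier

X = [[0, 1], [1, 0], [1, 1]]  # hypothetical features, one row per sample
clf = DecisionTreeClassifier().fit(X, y_converted)
clf.predict([[0, 1]])
# e.g. array([[1, 1, 0]])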
Based on the information here: https://scikit-learn.org/stable/modules/multiclass.html#multioutputclassifier
You can use sklearn.multioutput.MultiOutputClassifier with a decision tree to get multi-label behavior. If I understand correctly, it works by internally creating a separate tree for each label.
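As a rough sketch of that setup (the feature rows X are made up, and the labels are binarized as in the other answer):

from sklearn.multioutput import MultiOutputClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.tree import DecisionTreeClassifier

X = [[0, 1], [1, 0], [1, 1]]  # hypothetical features, one row per sample
Y = MultiLabelBinarizer().fit_transform([['brufen', 'amoxil'], ['brufen'], ['xanex']])

# MultiOutputClassifier fits one decision tree per label column
clf = MultiOutputClassifier(DecisionTreeClassifier()).fit(X, Y)
print(clf.predict([[1, 1]]))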
I am trying to round all the values in this DataFrame. However, the pandas round() method explodes my DataFrame from 150 rows to 7518 rows!
It may be that there is something odd with the data in the dataframe, but then again, one would not expect a simple rounding function to do this.
Below, I replicate the error using 1) simulated data and 2) the data that leads to the said error.
This results in 150 rows, which is the correct number:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.random([150, 4]), columns=['A', 'B', 'C', 'D'])
df["cat"] = "MID"
df.loc[:399,["cat"]] = "LOW"
df.iloc[-400:,-1] = "HI"
df.cat.value_counts()
df.set_index("cat", inplace=True)
df.round(3)
Using the data from my dropbox folder, the round-function produces a whopping 7518 rows:
dfb = pd.read_pickle('dfna.pkl')
dfb.round(3)
This is strange. I solved it for now using this rather ugly line:
dfb = dfb.reset_index().round({'A': 3, 'B': 3, 'C': 3, 'D': 3}).set_index('tricile')
However, this is not ideal, given that pandas' round method acts in mysterious ways and may affect future programs.
I think it is a bug: round with a duplicated CategoricalIndex. So I created pandas issue 21809 and pandas issue 21810.
A similar solution to yours is:
print (dfb.reset_index().round(3).set_index('tricile'))
Or remove the CategoricalIndex:
dfb.index = dfb.index.astype(str)
print (dfb.round(3))
I used this code and it's generating unigrams, bigrams, and trigrams from the given text. But I want to extract unigrams, bigrams, and trigrams from a specific column of a large CSV file. Kindly help me with how I should proceed.
Firstly, some fancy code to produce the DataFrame.
from io import StringIO
import pandas as pd
sio = StringIO("""I am just going to type up something because you inserted an image instead ctr+c and ctr+v the code to Stackoverflow.
Actually, it's unclear what you want to do with the ngram counts.
Perhaps, it might be better to use the `nltk.everygrams()` if you want a global count.
And if you're going to build some sort of ngram language model, then it might not be efficient to do it as you have done too.""")
with sio as fin:
    texts = [line for line in fin]

df = pd.DataFrame({'text': texts})
Then you can easily use DataFrame.apply to extract the ngrams, e.g.
from collections import Counter
from functools import partial
from nltk import ngrams, word_tokenize
for i in range(1, 4):
    _ngrams = partial(ngrams, n=i)
    df['{}-grams'.format(i)] = df['text'].apply(lambda x: Counter(_ngrams(word_tokenize(x))))
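For the actual use case, a text column in a large CSV, the same apply pattern should work after pd.read_csv; here is a rough sketch (the file name and column name are hypothetical, and you may need chunking for very large files):

from collections import Counter
from functools import partial
from nltk import ngrams, word_tokenize
import pandas as pd

df = pd.read_csv('big_file.csv', usecols=['text'])  # hypothetical file and column names
for i in range(1, 4):
    _ngrams = partial(ngrams, n=i)
    df['{}-grams'.format(i)] = df['text'].apply(lambda x: Counter(_ngrams(word_tokenize(str(x)))))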
I am trying to use networkx to create a DiGraph. I want to use add_edges_from(), and I want the edges and their data to be generated from 3-tuples.
I am importing the data from a CSV file. I have three columns: one for ids (first set of nodes), one for a set of names (second set of nodes), and another for capacities (no headers in the file). So, I created a dictionary for the ids and capacities.
dictionary = dict(zip(id, capacity))
then I zipped the tuples containing the edges data:
List = zip(id, name, capacity)
but when I execute the next line, it gives me an assertion error.
G.add_edges_from(List, 'weight': 1)
Can someone help me with this problem? I have been trying for a week with no luck.
P.S. I'm a newbie in programming.
EDIT:
So, I found the following solution. I am honestly not sure how it works, but it did the job!
Here is the code:
import networkx as nx
import csv
G = nx.DiGraph()
capacity_dict = dict(zip(zip(id, name),capacity))
List = zip(id, name, capacity)
G.add_edges_from(capacity_dict, weight=1)
for u, v, d in List:
    G[u][v]['capacity'] = d
Now when I run:
G.edges(data=True)
The result will be:
[(2.0, 'First', {'capacity': 1.0, 'weight': 1}), (3.0, 'Second', {'capacity': 2.0, 'weight': 1})]
I am using the network simplex. Now, I am trying to find a way to make the output of the flowDict more understandable, because it is only showing the ids of the flow. (Maybe I'll try to put them in a database and return the whole row of data instead of using the ids only.)
A few improvements on your version. (1) NetworkX algorithms assume that weight is 1 unless you specifically set it differently. Hence there is no need to set it explicitly in your case. (2) Using the generator allows the capacity attribute to be set explicitly and other attributes to also be set once per record. (3) The use of a generator to process each record as it comes through saves you having to iterate through the whole list twice. The performance improvement is probably negligible on small datasets but still it feels more elegant. Having said that -- your method clearly works!
import networkx as nx
import csv
# simulate a csv file.
# This makes a multi-line string behave as a file.
from io import StringIO
filehandle = StringIO('''a,b,30
b,c,40
d,a,20
''')
# process each row in the file
# and generate an edge from each
def edge_generator(fh):
    reader = csv.reader(fh)
    for row in reader:
        row[-1] = float(row[-1])  # convert capacity to float
        # add other attributes to the dict() below as needed...
        # e.g. you might add weights here as well.
        yield (row[0],
               row[1],
               dict(capacity=row[2]))
# create the graph
G = nx.DiGraph()
G.add_edges_from(edge_generator(filehandle))
print(G.edges(data=True))
Returns this:
[('a', 'b', {'capacity': 30.0}),
('b', 'c', {'capacity': 40.0}),
('d', 'a', {'capacity': 20.0})]