Subtract a vector from each row of a dataframe - python-2.7

My dataframe looks like this:
fruits = pd.DataFrame({'orange': [10, 20], 'apple': [30, 40], 'banana': [50, 60]})
apple banana orange
0 30 50 10
1 40 60 20
And I have this vector (it's also a dataframe):
sold = pd.DataFrame({'orange': [1], 'apple': [2], 'banana': [3]})
apple banana orange
0 2 3 1
I want to subtract this vector from each row of the initial dataframe to obtain a dataframe which looks like this:
apple banana orange
0 28.0 47.0 9.0
1 38.0 57.0 19.0
I tried:
print fruits.subtract(sold, axis = 0)
And the output is
apple banana orange
0 28.0 47.0 9.0
1 NaN NaN NaN
It worked only for the first row. I could create a dataframe that repeats the vector for every row, but is there a more efficient way to subtract this vector? I don't want to use a loop.

Convert the df to a Series using squeeze and pass axis=1:
In [6]:
fruits.sub(sold.squeeze(), axis=1)
Out[6]:
apple banana orange
0 28 47 9
1 38 57 19
The conversion is necessary because, by design, arithmetic operations between DataFrames align on both index and columns. Passing a Series instead lets you subtract its values from every row of the df.
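For reference (assuming the sold frame from the question), squeeze collapses the single-row DataFrame into a Series indexed by column name, which is exactly what sub(..., axis=1) aligns against:
In [7]:
sold.squeeze()
Out[7]:
apple     2
banana    3
orange    1
Name: 0, dtype: int64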

Try:
fruits.sub(sold.iloc[0, :])
What you tried before didn't work because sold is a dataframe, so the subtraction tries to align on both columns and index. sold.iloc[0, :] selects the first row as a Series and thus works as you intended.
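A numpy-style alternative is also possible (just a sketch; it assumes both frames list their columns in the same order, since broadcasting ignores labels):
# sold.values is a 1x3 array; numpy broadcasting subtracts it from every row
fruits - sold.values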

Related

Pandas and regex, decomposing text and numbers into several columns with headings

I have a dataframe with a column containing:
1 Tile 1 up Red 2146 (75) Green 1671 (75)
The 1s can each be up to 10
up can also be down
The 2146 and 1671 can be any number up to 9999
What's the best way to break each of these out into separate columns without using split? I was looking at regex but am not sure how to handle this (especially the whitespace). I liked the idea of putting the new column names in too and started with:
Pixel.str.extract(r'(?P<num1>\d)(?P<text>[Tile])(?P<Tile>\d)')
Thanks for any help
To avoid an overly complicated regex pattern, perhaps you can use str.extractall to get all numbers, and then concat to your current df. For up or down, use str.findall:
df = pd.DataFrame({"title":["1 Tile 1 up Red 2146 (75) Green 1671 (75)",
"10 Tile 10 down Red 9999 (75) Green 9999 (75)"]})
df = pd.concat([df, df["title"].str.extractall(r'(\d+)').unstack().loc[:,0]], axis=1)
df["direction"] = df["title"].str.findall(r"\bup\b|\bdown\b").str[0]
print (df)
title 0 1 2 3 4 5 direction
0 1 Tile 1 up Red 2146 (75) Green 1671 (75) 1 1 2146 75 1671 75 up
1 10 Tile 10 down Red 9999 (75) Green 9999 (75) 10 10 9999 75 9999 75 down
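If you also want the headed column names the asker mentioned, the numbered columns can be renamed afterwards (the names below are hypothetical, not from the question):
df.columns = ['title', 'num1', 'tile', 'red', 'red_pct',
              'green', 'green_pct', 'direction']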

Google Sheet Function with IF statement to add 1/0 column

I want to query a number of rows from one sheet into another sheet, and to the right of these rows add a column based on one of the queried columns. Meaning that if column C is "IL", I want the added column to show 0, otherwise 1 (the samples below will make it clearer).
I have tried doing this with Query and Arrayformula, without query, with Filter and importrange. An example of what I tried:
=query(Data!A1:AG,"Select D, E, J, E-J, Q, AG " & IF(AG="Il",0, 1),1)
Raw data sample:
Captured Amount Fee Country
TRUE 336 10.04 NZ
TRUE 37 1.37 GB
TRUE 150 4.65 US
TRUE 45 1.61 US
TRUE 20 0.88 IL
What I would want as a result:
Amount Fee Country Sort
336 10.04 NZ 1
37 1.37 GB 1
150 4.65 US 1
45 1.61 US 1
20 0.88 IL 0
Try it like this:
=ARRAYFORMULA(QUERY({Data!A1:Q, {"Sort"; IF(Data!AG2:AG="IL", 0, 1)}},
"select Col4,Col5,Col9,Col5-Col9,Col17,Col18 label Col5-Col9''", 1))

Issue Calculating Mean of Grouped Data for entire range of dataset using Pandas

I have a data set of daily temperatures for which I want to calculate 20 year means. The data look like this:
1974 1 1 5.3 4.6 7.3 3.4
1974 1 2 3.3 7.2 4.5 6.5
...
2005 12 364 4.2 5.2 3.3 4.6
2005 12 365 3.1 5.5 2.6 6.8
There is no header in the file but the first column contains the year, the second column the month, and the third column the day of the year. The rest of the columns are temperature data.
I want to calculate the average temperature for each day over a period of 20 years. I thought the best way to do that would be to group the data by day and calculate the mean of each day for a specific range of years. Here is my code:
import pandas as pd
hist_fn = 'tmean_daily_1974_2005.txt'
twenty_year_fn = '20_yr_mean_1974_1993.txt'
start = 1974
end = 1993
hist_mean = pd.read_csv(hist_fn, sep='\s+', header=None)
# Limit dataframe to only the 20 years for which I want the mean calculated
interval_mean = hist_mean[(hist_mean[0]>=start) & (hist_mean[0]<=end)]
# Replace the year column with a label showing which interval this file covers
interval_mean.iloc[:, 0] = ("%s-%s" % (start, end))
# Generate mean for each day spread across all the years in the dataframe
interval_mean.iloc[:, 3:] = interval_mean.groupby(2, as_index=False).mean().iloc[:, 2:]
# Write multiyear mean to txt
interval_mean.to_csv(twenty_year_fn, sep='\t', header=False, index=False)
The data set spans more than 20 years, and this method worked for the first 20-year interval but gives me a (mostly) empty text file for any other range of years.
So when I use these inputs it works:
start = 1974
end = 1993
and it produces a file that looks like this:
1974-1993 1 1 4.33 5.25 6.84 3.67
1974-1993 1 2 7.23 6.22 5.65 6.23
...
1974-1993 12 364 5.12 4.34 5.21 2.16
1974-1993 12 365 4.81 5.95 3.56 6.78
but when I change the inputs to this:
start = 1975
end = 1994
it produces a .txt file with no temperatures:
1975-1994 1 1
1975-1994 1 2
...
1975-1994 12 364
1975-1994 12 365
I don't understand why this method works for the first 20 year interval but none of the subsequent intervals. Is it something to do with the way the data is organized or how it is being sliced?
Now, to the problem you presented:
The strange behavior is due to the fact that pandas matches indices on assignment, and slicing preserves the original indices. That means that when setting
interval_mean.iloc[:, 3:] = interval_mean.groupby(2, as_index=False).mean().iloc[:, 2:]
Note that interval_mean.groupby(2, as_index=False).mean() has indices 0, ..., 364 (since as_index=False makes the groupby operation create a fresh index; otherwise it would have been the day number). On the other hand, interval_mean keeps the original indices from hist_mean, meaning the first time (the first 20 years) it has indices 0, ..., ~20*365, while the second time its indices start around 365 (the 1974 rows are excluded) and count up from there. Since none of those labels overlap 0, ..., 364, every assignment misses and the temperature columns become NaN.
This is a bit confusing at first, but pandas offers great documentation about it, and people quickly discover why it is so useful.
I'll try to explain what happens with an example:
Assume we have the following DataFrame:
import numpy as np
import pandas as pd

# one possible random draw is shown below
df = pd.DataFrame(np.reshape(np.random.randint(5, size=30), [-1, 3]))
df
0 1 2
0 1 1 2
1 2 1 1
2 0 1 2
3 0 2 0
4 2 1 0
5 0 1 2
6 2 2 1
7 1 0 2
8 0 1 0
9 1 2 0
Note that the column names are 0,1,2 and the row names (the index) are 0, ..., 9.
When we perform the groupby we obtain:
df.groupby(0, as_index=False).mean()
0 1 2
0 0 1.250000 1.000000
1 1 1.000000 1.333333
2 2 1.333333 0.666667
(The index happens to equal the grouped-by values only because we drew numbers between 0 and 2.) Now, when we assign to df.loc, pandas replaces every cell with the corresponding cell of the value being assigned, if such a cell exists. Otherwise, it leaves NaN.
df.loc[:,:] = df.groupby(0, as_index=False).mean()
df
0 1 2
0 0.0 1.250000 1.000000
1 1.0 1.000000 1.333333
2 2.0 1.333333 0.666667
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 NaN NaN NaN
7 NaN NaN NaN
8 NaN NaN NaN
9 NaN NaN NaN
And when you write NaN to csv, it leaves the cell blank.
The last piece of the puzzle is that interval_mean keeps the original indices: boolean slicing returns the selected rows with their original index labels:
df[df[1] > 1]
0 1 2
3 0 2 0
6 2 2 1
9 1 2 0
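For completeness, one way to restructure the original task so index alignment never enters the picture is to build the output directly from the groupby result (a sketch following the asker's file layout, untested against the real data):
import pandas as pd

start, end = 1975, 1994
hist_mean = pd.read_csv('tmean_daily_1974_2005.txt', sep=r'\s+', header=None)
interval = hist_mean[(hist_mean[0] >= start) & (hist_mean[0] <= end)]

# Group by month (column 1) and day-of-year (column 2), average the
# temperature columns, and drop the averaged year column (column 0).
day_means = interval.groupby([1, 2]).mean().iloc[:, 1:].reset_index()
day_means.insert(0, 'period', '%s-%s' % (start, end))
day_means.to_csv('20_yr_mean_%s_%s.txt' % (start, end), sep='\t',
                 header=False, index=False)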

Multiply a pandas dataframe column by a constant

I have two dataframes:
df:
Conference Year SampleCitations Percent
0 CIKM 1995 373 0.027153
1 CIKM 1996 242 0.017617
2 CIKM 1997 314 0.022858
3 CIKM 1998 427 0.031084
And another dataframe which holds the total number of citations:
allcitations= pd.read_sql("Select Sum(Citations) as ActualCitations from publications "
I simply want to multiply the Percent column in dataframe df by the constant value ActualCitations.
I tried the following:
df['ActualCitations']=df['Percent'].multiply(allcitations['ActualCitations'])
and
df['ActualCitations']=df['Percent']* allcitations['ActualCitations']
But both only perform it for the first row and the rest is NaN, as shown below:
Conference Year SampleCitations Percent ActualCitations
0 CIKM 1995 373 0.027153 1485.374682
1 CIKM 1996 242 0.017617 NaN
2 CIKM 1997 314 0.022858 NaN
3 CIKM 1998 427 0.031084 NaN
The problem in this case is pandas' auto-alignment (usually a good thing). Because your 'constant' is actually in a dataframe, pandas tries to build row 0 from the two row 0s, row 1 from the two row 1s, and so on; but there is no row 1 in the second dataset, so you get NaN from there forward.
So what you need to do is intentionally break the dataframe aspect of the second dataframe so that pandas will 'broadcast' the constant to ALL rows. One way to do this is with values, which in this case essentially drops the index so that you are left with a numpy array holding one element (really a scalar, but technically contained in a numpy array). tolist() will also accomplish the same thing.
allcitations = pd.DataFrame({'ActualCitations': [54703.888410120424]})
df['Percent'] * allcitations['ActualCitations'].values
0 1485.374682
1 963.718402
2 1250.421481
3 1700.415667
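An equivalent approach (a sketch) pulls the scalar out explicitly before multiplying, which sidesteps alignment entirely:
# .iloc[0] extracts the single value as a plain scalar
total = allcitations['ActualCitations'].iloc[0]
df['ActualCitations'] = df['Percent'] * total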

How to append a new column to my Pandas DataFrame based on a row-based calculation?

Let's say I have a Pandas DataFrame with two columns: 1) user_id, 2) steps (which contains the number of steps on the given date). Now I want to calculate the difference between the number of steps and the number of steps in the preceding measurement (measurements are guaranteed to be in order within my DataFrame).
So basically this comes down to appending an extra column to my DataFrame, where each row's value equals the 'steps' value in that row minus the 'steps' value in the row above (or 0 if it is the first row). To complicate things further, I want to calculate these differences per user_id, so I want to make sure that I do not subtract the steps values of two rows with different user_ids.
Does anyone have an idea how to get this done with Python 2.7 and Pandas?
So an example to illustrate this.
Example input:
user_id steps
1015 48
1015 23
1015 79
1016 10
1016 20
Desired output:
user_id steps d_steps
1015 48 0
1015 23 -25
1015 79 56
2023 10 0
2023 20 10
Your desired output shows user ids (2023) that are not in your original data, but the following does what you want; you will have to replace/fill the NaN values with 0:
In [16]:
df['d_steps'] = df.groupby('user_id').transform('diff')
df.fillna(0, inplace=True)
df
Out[16]:
user_id steps d_steps
0 1015 48 0
1 1015 23 -25
2 1015 79 56
3 1016 10 0
4 1016 20 10
Here we generate the desired column by calling transform on the groupby object and passing a string that maps to the diff method, which subtracts the previous row's value. transform applies a function and returns a result whose index is aligned to the df.
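An equivalent spelling (a sketch) selects the column first and calls diff directly, then fills the leading NaN of each group:
df['d_steps'] = df.groupby('user_id')['steps'].diff().fillna(0)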