How To Interpret Least Square Means and Standard Error - lsmeans

I am trying to understand the results I got for a fake dataset. I have two independent variables, hours and type, and a response, pain.
First question: how was 82.46721 calculated as the lsmean for the first type?
Second question: why is the standard error exactly the same (8.24003) for both types?
Third question: why are the degrees of freedom 3 for both types?
data = data.frame(
  type  = c("A", "A", "A", "B", "B", "B"),
  hours = c(60, 72, 61, 54, 68, 66),
  # pain = c(85, 95, 69, 73, 29, 30)
  pain  = c(85, 95, 69, 85, 95, 69)
)
model = lm(pain ~ hours + type, data = data)
lsmeans(model, c("type", "hours"))
> data
  type hours pain
1    A    60   85
2    A    72   95
3    A    61   69
4    B    54   85
5    B    68   95
6    B    66   69
> lsmeans(model, c("type", "hours"))
 type hours   lsmean      SE df lower.CL upper.CL
 A     63.5 82.46721 8.24003  3 56.24376 108.6907
 B     63.5 83.53279 8.24003  3 57.30933 109.7562

Try this:
newdat <- data.frame(type = c("A", "B"), hours = c(63.5, 63.5))
predict(model, newdata = newdat)
An important thing to note here is that your model treats hours as a continuous predictor, not a factor, so the lsmeans are simply model predictions made at the average value of hours (63.5) for each type.
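To see where 82.46721 comes from, here is a minimal sketch that refits the same model by ordinary least squares using plain numpy (numpy only; lsmeans itself is not needed) and predicts at the mean of hours for each type:

```python
import numpy as np

# Same fake data as above
hours = np.array([60, 72, 61, 54, 68, 66], dtype=float)
type_b = np.array([0, 0, 0, 1, 1, 1], dtype=float)  # indicator: 1 for type B
pain = np.array([85, 95, 69, 85, 95, 69], dtype=float)

# Design matrix for pain ~ hours + type: intercept, hours, type B indicator
X = np.column_stack([np.ones(6), hours, type_b])
beta, *_ = np.linalg.lstsq(X, pain, rcond=None)

# lsmeans are the fitted values at the average of hours (63.5) for each type
lsmean_a = beta[0] + beta[1] * 63.5
lsmean_b = beta[0] + beta[1] * 63.5 + beta[2]
print(round(lsmean_a, 5), round(lsmean_b, 5))  # 82.46721 83.53279
```

Both predictions are made at the same hours value (63.5), which is why they share a standard error here, and the df of 3 is the residual degrees of freedom: 6 observations minus 3 estimated coefficients.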

Find sum of the column values based on some other column

I have an input file like this:
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
c,u,g,nfk,ekh,trc,085,83,xppnl
For every unique value of Column 1, I need to find the sum of Column 7. Similarly, for every unique value of Column 2, I need to find the sum of Column 7.
Output for Column 1 should be like:
j,686
u,308
t,98
c,83
Output for Column 2 should be like:
z,686
i,308
a,98
u,83
I am fairly new to Python. How can I achieve the above?
This can be done using Python's Counter and csv libraries as follows:
from collections import Counter
import csv

c1 = Counter()
c2 = Counter()

with open('input.csv') as f_input:
    for cols in csv.reader(f_input):
        col7 = int(cols[6])
        c1[cols[0]] += col7
        c2[cols[1]] += col7

print("Column 1")
for value, count in c1.items():
    print('{},{}'.format(value, count))

print("\nColumn 2")
for value, count in c2.items():
    print('{},{}'.format(value, count))
Giving you the following output:
Column 1
c,85
j,686
u,308
t,1080
Column 2
i,308
a,1080
z,686
u,85
A Counter is a type of Python dictionary that is useful for tallying items automatically; here it accumulates a running total per key. c1 holds the totals keyed by the column 1 entries and c2 holds the totals keyed by the column 2 entries. Note that Python numbers lists starting from 0, so the first entry in a list is [0].
The csv library loads each line of the file into a list, with each entry in the list representing a different column. The code takes column 7 (i.e. cols[6]) and converts it to an integer, as all columns are read as strings. It is then added to the appropriate counter using the column 1 or column 2 value as the key. The result is two dictionaries holding the totalled counts for each key.
You can use pandas:
import pandas as pd

df = pd.read_csv('my_file.csv', header=None)
print(df.groupby(0)[6].sum())
print(df.groupby(1)[6].sum())
Output:
0
c 85
j 686
t 1080
u 308
Name: 6, dtype: int64
1
a 1080
i 308
u 85
z 686
Name: 6, dtype: int64
The data frame should look like this:
print(df.head())
Output:
   0  1  2    3    4    5    6   7      8
0  j  z  b  bsy  afj  upz  343  13  ruhwd
1  u  i  a  dvp  ibt  dxv  154   0  adsif
2  t  a  a  jqj  dtd  yxq  540  49  kxthz
3  j  z  b  bsy  afj  upz  343  13  ruhwd
4  u  i  a  dvp  ibt  dxv  154   0  adsif
You can also use your own names for the columns, like c1, c2, ..., c9:
df = pd.read_csv('my_file.csv', index_col=False, names=['c' + str(x) for x in range(1, 10)])
print(df)
Output:
  c1 c2 c3   c4   c5   c6   c7  c8     c9
0  j  z  b  bsy  afj  upz  343  13  ruhwd
1  u  i  a  dvp  ibt  dxv  154   0  adsif
2  t  a  a  jqj  dtd  yxq  540  49  kxthz
3  j  z  b  bsy  afj  upz  343  13  ruhwd
4  u  i  a  dvp  ibt  dxv  154   0  adsif
5  t  a  a  jqj  dtd  yxq  540  49  kxthz
6  c  u  g  nfk  ekh  trc   85  83  xppnl
Now, group by column 1 (c1) or column 2 (c2) and sum up column 7 (c7):
print(df.groupby(['c1'])['c7'].sum())
print(df.groupby(['c2'])['c7'].sum())
Output:
c1
c 85
j 686
t 1080
u 308
Name: c7, dtype: int64
c2
a 1080
i 308
u 85
z 686
Name: c7, dtype: int64
SO isn't supposed to be a code-writing service, but I had a few minutes. :) Without pandas you can do it with the csv module:
import csv

def sum_to(results, key, add_value):
    if key not in results:
        results[key] = 0
    results[key] += int(add_value)

column1_results = {}
column2_results = {}

with open("input.csv", 'rt') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        sum_to(column1_results, row[0], row[6])
        sum_to(column2_results, row[1], row[6])

print(column1_results)
print(column2_results)
Results:
{'c': 85, 'j': 686, 'u': 308, 't': 1080}
{'i': 308, 'a': 1080, 'z': 686, 'u': 85}
Your expected results don't seem to match the math that Mike's answer and mine got using your spec. I'd double check that.
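For reference, the sums can be re-checked directly from the rows shown in the question with nothing but the standard library:

```python
# The seven input rows exactly as given in the question
rows = """\
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
c,u,g,nfk,ekh,trc,085,83,xppnl""".splitlines()

sums = {}
for row in rows:
    cols = row.split(',')
    sums[cols[0]] = sums.get(cols[0], 0) + int(cols[6])

print(sums)  # {'j': 686, 'u': 308, 't': 1080, 'c': 85}
```

t comes out as 1080 (540 + 540), not the 98 in the question's expected output.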

Passing Values from table to CTE

I have a table of slot numbers, i.e. warehouse slot numbers.
0110H
0310H
0311H
0312H
0313H
0314H
The table is called WarehouseLocationLimits and the column name is “F1.” The 3rd and 4th digits of the above slot numbers represent a maximum number of bays in an aisle of the warehouse.
I have the following code:
DECLARE @I INT
SET @I = 1;

WITH CTS(BAY) AS (
    SELECT @I
    UNION ALL
    SELECT BAY + 1 FROM CTS
    WHERE BAY < 5
)
SELECT F1, LEFT(F1, 2) AISLE, CAST(SUBSTRING(F1, 3, 2) AS INTEGER) BAYMAX,
       SUBSTRING(F1, 5, 1) LEVEL, BAY
FROM WarehouseLocationLimits WL, CTS
WHERE F1 IS NOT NULL
ORDER BY F1, BAY
Which generates something like the following:
0110H 01 10 H 1
0110H 01 10 H 2
0110H 01 10 H 3
0110H 01 10 H 4
0110H 01 10 H 5
0310H 03 10 H 1
0310H 03 10 H 2
0310H 03 10 H 3
0310H 03 10 H 4
0310H 03 10 H 5
Note: for each “slot” the CTE generates 5 rows because of the literal ‘5’ in the WHERE clause of the CTE. Instead, I need to pass each slot's own value of CAST(SUBSTRING(F1, 3, 2) AS INTEGER) to the CTE. How can I do that?
Thanks in advance for your help
Clyde
Ok. I got it. The solution is the following:
WITH CTS(SLOTNO, AISLE, BAY, LEVEL) AS (
    SELECT F1, LEFT(F1, 2), SUBSTRING(F1, 3, 2), ASCII(SUBSTRING(F1, 5, 1))
    FROM WarehouseLocationLimits WL1
    WHERE F1 IS NOT NULL
    UNION ALL
    SELECT WL2.F1, LEFT(WL2.F1, 2), BAY, LEVEL - 1
    FROM WarehouseLocationLimits WL2, CTS
    WHERE WL2.F1 = CTS.SLOTNO AND LEVEL > ASCII('A')
)
SELECT *, CHAR(LEVEL)
FROM CTS
ORDER BY SLOTNO, LEVEL DESC
I didn't understand that the first query (the one above the UNION ALL) is a "seed" query that runs only once per row of the table, while the second query recurses on the CTE's own output. Once I understood that, it all fell into place for me.
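To make the seed/recursion idea concrete, here is a small Python sketch (illustration only, with made-up slot values) of the expansion the original question was after: one row per bay, counting up to each slot's own BAYMAX instead of a literal 5:

```python
# Made-up slot numbers; characters 3-4 encode the maximum bay for that aisle
slots = ["0110H", "0305H"]

rows = []
for slot in slots:
    baymax = int(slot[2:4])  # e.g. "0110H" -> 10, like CAST(SUBSTRING(F1, 3, 2) AS INTEGER)
    bay = 1                  # the "seed" row starts each slot at bay 1
    while bay <= baymax:     # the "recursive" step adds 1 until the slot's own limit
        rows.append((slot, slot[:2], baymax, slot[4], bay))
        bay += 1

print(rows[0], len(rows))  # ('0110H', '01', 10, 'H', 1) 15
```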

Using If/Truth Statements with pandas

I tried referencing the pandas documentation but still can't figure out how to proceed.
I have this data
In [6]: df
Out[6]:
    strike putCall
0       50       C
1       55       P
2       60       C
3       65       C
4       70       C
5       75       P
6       80       P
7       85       C
8       90       P
9       95       C
10     100       C
11     105       P
12     110       P
13     115       C
14     120       P
15     125       C
16     130       C
17     135       P
18     140       C
19     145       C
20     150       C
and am trying to run this code:
if df['putCall'] == 'P':
    if df['strike'] < 100:
        df['optVol'] = 1
    else:
        df['optVol'] = -999
else:
    if df['strike'] > df['avg_syn']:
        df['optVol'] = 1
    else:
        df['optVol'] = -999
I get an error message:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
The above code and data are an example only, to illustrate the problem I ran into.
Any assistance would be appreciated.
John
OP add-on
The above question was answered very well by Joris, but I have a slight add-on question.
How would I call a function such as
def Bool2(df):
    df['optVol'] = df['strike'] / 100
    return df
rather than assigning the value of optVol directly to 1 in the line:
df.loc[(df['putCall'] == 'P') & (df['strike'] < 100), 'optVol'] = 1
I would like to have the function Bool2 called and do the assigning. Obviously, the Bool2 function is much more complicated than I have portrayed.
I tried this (a shot in the dark), but it did not work:
df.loc[(df['putCall'] == 'P') & (df['strike']<100), 'optVol'] =df.apply(Bool2,axis=1)
thanks again for the help
Typically, when you want to set values using such a if-else logic, boolean indexing is the solution (see docs):
The logic in:
if df['strike']<100:
df['optVol'] = 1
can be expressed with boolean indexing as:
df.loc[df['strike'] < 100, 'optVol'] = 1
For your example, you have multiple nested if-else blocks, and you can combine conditions using &:
df.loc[(df['putCall'] == 'P') & (df['strike'] < 100), 'optVol'] = 1
The full equivalent of your code above could be like this:
df['optVol'] = -999
df.loc[(df['putCall'] == 'P') & (df['strike'] < 100), 'optVol'] = 1
df.loc[(df['putCall'] != 'P') & (df['strike'] > df['avg_syn']), 'optVol'] = 1
The reason you get the error message above is that a comparison like df['strike'] < 100 works elementwise, so it gives you a Series of True and False values, while if expects a single True or False value.
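Putting it together, a runnable sketch of the boolean-indexing equivalent (with a made-up avg_syn column, since the question's data doesn't show one):

```python
import pandas as pd

df = pd.DataFrame({
    'strike':  [50, 95, 105, 120],
    'putCall': ['C', 'P', 'P', 'C'],
    'avg_syn': [100, 100, 100, 100],  # hypothetical values for illustration
})

# Set the default first, then overwrite the rows matching each combined condition
df['optVol'] = -999
df.loc[(df['putCall'] == 'P') & (df['strike'] < 100), 'optVol'] = 1
df.loc[(df['putCall'] != 'P') & (df['strike'] > df['avg_syn']), 'optVol'] = 1

print(df['optVol'].tolist())  # [-999, 1, -999, 1]
```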

How to read Fortran fixed-width formatted text file in Python?

I have a Fortran formatted text file (here are the first 3 rows):
00033+3251 A B C? 6.96 5.480" 358 9.12 F0V 0.00 2.28s 1.00: 2MASS, dJ=1.3
00033+3251 Aa Ab Aab S1,E 0.62 0.273m 0 9.28 F0V 11.28 K2 1.68* 0.32* SB 1469
00033+3251 Aab Ac A E* 4.26 0.076" 0 9.12 F0V 0.00 2.00s 0.28* 2008MNRAS.383.1506
and the file format description:
--------------------------------------------------------------------------------
Bytes Format Units Label Explanations
--------------------------------------------------------------------------------
1- 10 A10 --- WDS WDS(J2000)
12- 14 A3 --- Primary Designation of the primary
16- 18 A3 --- Secondary Designation of the secondary component
20- 22 A3 --- Parent Designation of the parent (1)
24- 29 A6 --- Type Observing technique/status (2)
31- 35 F5.2 d logP ? Logarithm (10) of period in days
37- 44 F8.3 --- Sep Separation or axis
45 A1 --- x_Sep ['"m] Units of sep. (',",m)
47- 49 I3 deg PA Position angle
51- 55 F5.2 mag Vmag1 V-magnitude of the primary
57- 61 A5 --- SP1 Spectral type of the primary
63- 67 F5.2 mag Vmag2 V-magnitude of the secondary
69- 73 A5 --- SP2 Spectral type of the secondary
75- 79 F5.2 solMass Mass1 Mass of the primary
80 A1 --- MCode1 Mass estimation code for primary (3)
82- 86 F5.2 solMass Mass2 Mass of the secondary
87 A1 --- MCode2 Mass estimation code for secondary (3)
89-108 A20 --- Rem Remark
How can I read my file in Python? I have found only the read_fwf function from the pandas library.
import pandas as pd
filename = 'systems'
columns = ((0,10),(11,14),(15,18),(19,22),(23,29),(30,35),(36,44),(45,45),(46,49),(50,55),(56,61),(62,67),(68,73),(74,79),(80,80),(81,86),(87,87),(88,108))
data = pd.read_fwf(filename, colspecs = columns, header=None)
Is this the only possible and effective way? I hope I can do this without pandas. Do you have any suggestions?
columns = ((0,10),(11,14),(15,18),(19,22),(23,29),(30,35),
           (36,44),(44,45),(46,49),(50,55),(56,61),(62,67),
           (68,73),(74,79),(79,80),(81,86),(86,87),(88,108))

string = file.readline()
dataline = [string[c[0]:c[1]] for c in columns]
Note the column indices are (startbyte-1, endbyte), so a single-character field is, e.g., (44,45).
This leaves you with a list of strings. You probably want to convert them to floats, integers, etc.; there are a number of questions here on that topic.
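As a self-contained illustration of that slicing (a toy two-field layout with made-up data, not the WDS catalogue), including the conversion step:

```python
# Toy layout: bytes 1-10 hold a name (A10), bytes 12-16 hold a value (F5.2)
line = "alphacen    4.37"
columns = ((0, 10), (11, 16))  # (startbyte-1, endbyte) as described above

name, value = [line[a:b] for a, b in columns]
name = name.strip()    # fixed-width fields keep their padding
value = float(value)   # numeric columns arrive as strings

print(name, value)  # alphacen 4.37
```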
There is the fortranformat module with a FortranRecordReader class, but it is weak with the stars, comments, etc. that modern Fortran files contain. Still, for a well-behaved file it is useful, in combination with namedtuple. Example:
from fortranformat import FortranRecordReader
from collections import namedtuple

fline = FortranRecordReader('(a1,i3,i5,i5,i5,1x,a3,a4,1x,f13.5,f11.5,f11.3,f9.3,1x,a2,f11.3,f9.3,1x,i3,1x,f12.5,f11.5)')
record = namedtuple('nucleo', 'cc NZ N Z A el o massexcess uncmassex binding uncbind B beta uncbeta am_int am_float uncatmass')

with open('AME2012.mas12.ff', 'r') as f:
    for line in f:
        nucl = record._make(fline.read(line))
You can also try the module parse, or write your own.
This type of file can be read with astropy tables. The header you show looks a lot like a CDS-formatted ASCII table, which has a specific reader implemented for it:
http://astropy.readthedocs.org/en/latest/api/astropy.io.ascii.Cds.html#astropy.io.ascii.Cds
Expanding on arivero's answer, you could use fortranformat from PyPI. Here is what I would try (note the 1X descriptors to skip the separator bytes between fields, matching the byte table above):
from fortranformat import FortranRecordReader

fmt = FortranRecordReader('(A10,1X,A3,1X,A3,1X,A3,1X,A6,1X,F5.2,1X,F8.3,A1,1X,I3,1X,F5.2,1X,A5,1X,F5.2,1X,A5,1X,F5.2,A1,1X,F5.2,A1,1X,A20)')
with open('myfile.txt', 'r') as fh:
    for line in fh:
        line_vals = fmt.read(line)
This should convert the values appropriately to numbers, strings, etc.

R function(): how to pass parameters which contain characters and regular expression

My data is as follows:
> df2
   id calmonth         product
1 101       01           apple
2 102       01 apple&nokia&htc
3 103       01             htc
4 104       01       apple&htc
5 104       02           nokia
Now I want to calculate the number of ids whose products contain both 'apple' and 'htc' when calmonth = '01'. I need not only 'apple' and 'htc', but also 'apple' and 'nokia', etc.
So I want to do this with a function like this:
xandy = function(a, b) data.frame(product = paste(a, b, sep='&'),
                                  csum = length(grep('a.*b', x=df2$product)))
Also, I made a parameter list like this:
para = c('apple', 'htc', 'nokia')
But the problem is here. When I pass parameters like
xandy(para[1],para[2])
the result is as follows:
product csum
1 apple&htc 0
My expected result should be
product csum calmonth
1 apple&htc 2 01
2 apple&htc 0 02
So what is wrong with the parameter passing? And how can I correctly add calmonth to the function xandy?
FYI, this question stems from another question of mine:
What's the R statement responding to SQL's 'in' statement
EDIT AFTER COMMENT
My expected result will be:
product csum calmonth
1 apple&htc 2 01
2 apple&htc 0 02
My answer is another way to tackle your problem.
library(stringr)
The function contains splits up the elements of a string vector according to the split character and evaluates whether all target words are contained.
contains <- function(x, target, split="&") {
    l <- str_split(x, split)
    sapply(l, function(x, y) all(y %in% x), y=target)
}
contains(df2$product, c("apple", "htc"))
[1] FALSE  TRUE FALSE  TRUE FALSE
The rest is just subsetting and summarizing:
library(plyr)  # for ddply

get_data <- function(a, b) {
    e <- subset(df2, contains(product, c(a, b)))
    e$product2 <- paste(a, b, sep="&")
    ddply(e, .(calmonth, product2), summarise, csum=length(id))
}
Using the data below, order does not matter anymore (see comment below).
get_data("apple", "htc")
calmonth product2 csum
1 1 apple&htc 1
2 2 apple&htc 2
get_data("htc", "apple")
calmonth product2 csum
1 1 htc&apple 1
2 2 htc&apple 2
I know this is not a direct answer to your question but I find this approach quite clean.
EDIT AFTER COMMENT
The reason you get csum=0 is simply that you are searching for the wrong regex pattern: 'a.*b' matches a literal a, then anything, then a literal b, not apple ... htc. You need to construct the regex pattern from the arguments, i.e. paste0(a, ".*", b).
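The same pitfall can be reproduced in any regex engine; a quick Python sketch of the literal pattern versus the constructed one:

```python
import re

product = "apple&nokia&htc"
a, b = "apple", "htc"

# The mistake: 'a.*b' is literal (letter a, anything, letter b);
# there is no letter 'b' in the string, so nothing matches
print(bool(re.search('a.*b', product)))   # False

# The fix: build the pattern from the variables, like paste0(a, ".*", b) in R
pattern = a + ".*" + b                    # "apple.*htc"
print(bool(re.search(pattern, product)))  # True
```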
Here is a complete solution. I would not call it beautiful code, but anyway (note that I changed the data to show that it generalizes across months).
library(plyr)

df2 <- read.table(text="
id calmonth product
101 01 apple
102 01 apple&nokia&htc
103 01 htc
104 02 apple&htc
104 02 apple&htc", header=TRUE)

xandy <- function(a, b) {
    pattern <- paste0(a, ".*", b)
    d1 <- df2[grep(pattern, df2$product), ]
    d1$product <- paste0(a, "&", b)
    ddply(d1, .(calmonth), summarise,
          csum = length(calmonth),
          product = unique(product))
}
xandy("apple", "htc")
xandy("apple", "htc")
calmonth csum product
1 1 1 apple&htc
2 2 2 apple&htc