I am struggling to find an efficient way of retrieving the solution to an optimization problem. The solution consists of around 200K variable values that I would like to have in a pandas DataFrame. After searching online, the only approach I found for accessing the variables is a for loop that looks something like this:
instance = M.create_instance('input.dat') # reading in a datafile
results = opt.solve(instance, tee=True)
results.write()
instance.solutions.load_from(results)
for v in instance.component_objects(Var, active=True):
    print("Variable", v)
    varobject = getattr(instance, str(v))
    for index in varobject:
        print("  ", index, varobject[index].value)
I know I can use this for loop to store the values in a DataFrame, but it is pretty inefficient.
I found out how to access the indexes by using
import pandas as pd
index = pd.DataFrame(instance.component_objects(Var, active=True))
But I don't know how to get the solution.
There is actually a very simple and elegant solution, using the method pandas.DataFrame.from_dict combined with the Var.extract_values() method.
from pyomo.environ import *
import pandas as pd
m = ConcreteModel()
m.N = RangeSet(5)
m.x = Var(m.N, rule=lambda _, el: el**2) # x = [1,4,9,16,25]
df = pd.DataFrame.from_dict(m.x.extract_values(), orient='index', columns=[str(m.x)])
print(df)
yields
x
1 1
2 4
3 9
4 16
5 25
Note that for Var we can use both get_values() and extract_values(); they seem to do the same thing. For Param there is only extract_values().
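As a quick illustration, the same one-liner also works for a Param via extract_values() (a minimal sketch reusing the model m above; the parameter p is hypothetical and added only for this example):

m.p = Param(m.N, initialize=lambda _, i: 10*i)  # hypothetical Param: p = [10, 20, 30, 40, 50]
df_p = pd.DataFrame.from_dict(m.p.extract_values(), orient='index', columns=[str(m.p)])
print(df_p)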
Of course you can use instance.some_var.pprint() to print it on the screen.
But if you have a variable indexed by a large set, you can also write it to a
separate file. The following code writes the result to a .txt file:
f = open('Result.txt', 'a')
instance.some_var.pprint(f)
f.close()
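Equivalently, a with statement closes the file automatically (just a stylistic variant of the snippet above):

with open('Result.txt', 'a') as f:
    instance.some_var.pprint(f)  # same output as above; the file is closed on exit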
I had the same issue as Jasper and tried the suggested solutions. By doing so I noticed that the part writing the results takes most of the time. Maybe this is also true in Jasper's case.
results.write()
instance.solutions.load_from(results)
So I suggest suppressing these two lines if you can. Maybe someone has a suggestion for how to speed this up, or an alternative method?
Also, I saw that in this post (Pyomo: Save results to CSV files) the "for loop" method is recommended. A Pyomo developer states: "I think it's possible in option 2 for the indices and the variable slice to be iterated over in a different order which would invalidate your resulting array."
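For what it's worth, here is a minimal sketch of the shortened version. It assumes opt and instance are set up as in the question and uses some_var as a placeholder variable name; by default, opt.solve() already loads the solution back into the model, so the two lines above can usually be dropped and the values read directly:

import pandas as pd

results = opt.solve(instance, tee=True)  # the solution is loaded into the model by default
df = pd.DataFrame.from_dict(instance.some_var.extract_values(),
                            orient='index', columns=['some_var'])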
For simplicity of code and to largely avoid for loops, I found the pyomoio module in the urbs project, which has taken over the slightly deprecated code of pandaspyomo.py. It relies on each Pyomo object's iteritems() method and handles multiple dimensions elegantly. It can extract sets, parameters, and variables as pandas objects.
If I set up a small pyomo model
from pyomo.environ import *
import pyomoio as po
import pandas as pd
# Define a model with 200k values
m = ConcreteModel()
m.ix = RangeSet(200000)
def idem(model, i):
    return i

m.a = Param(m.ix, rule=idem)
I can read in the parameter with just one line of code
%%timeit
a_po = po.get_entity(m, 'a')
# 110 ms ± 1.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
However, if I compare it to the approach in the original question, it is not faster; it is even a little slower:
%%timeit
val = []
ix = []
varobject = getattr(m, 'a')
for index in varobject:
    ix.append(index)
    val.append(varobject[index])
a = pd.Series(index=ix, data=val)
# 92.5 ms ± 1.57 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I have a pandas DataFrame with 50 rows and 22,000 columns, and I would like to calculate a distance correlation (dcor package) between each pair of columns. The code that I created (with serial processing and a portion of the data) is:
import pandas as pd
import dcor
DF = pd.DataFrame({'X':[0.72,-0.25,-1.2,-3],'Y':[-0.128,0.2,2,5.6],'Z':[15,-0.425,-0.3,-5]})
DCOR_REZ=pd.DataFrame(index=['X','Y','Z'],columns=['X','Y','Z'])
col_names=DCOR_REZ.columns.tolist()
k=0
for i in col_names:
    v1=DF.loc[:,i].as_matrix()
    for j in col_names[k:]:
        v2=DF.loc[:,j].as_matrix()
        rez=dcor.distance_correlation(v1,v2)
        DCOR_REZ.at[i,j]=rez
        DCOR_REZ.at[j,i]=rez
    k=k+1
print DCOR_REZ
X Y Z
X 1 0.981778 0.854349
Y 0.981778 1 0.726328
Z 0.854349 0.726328 1
To execute this code on the full DataFrame I need about 21 hours!
Since my server has 40 processors, I was hoping to cut the time by a factor of 40 and get the results in about 30 minutes, but I don't know how to rewrite this code for parallel processing.
How can I rewrite the code?
Any help is appreciated.
I am the creator of the dcor package. One problem of this approach is that the pairwise distance matrices for each column are computed on each iteration, instead of just once. If you have enough memory, you could compute those matrices beforehand, and then compute the distance correlation:
import pandas as pd
import dcor
import numpy as np
from scipy.spatial.distance import pdist, squareform
DF = pd.DataFrame({'X':[0.72,-0.25,-1.2,-3],'Y':[-0.128,0.2,2,5.6],'Z':[15,-0.425,-0.3,-5]})
DCOR_REZ=pd.DataFrame(index=['X','Y','Z'],columns=['X','Y','Z'])
col_names=DCOR_REZ.columns.tolist()
k=0
dict_centered_matrices = {}
def compute_matrix(i):
    v1 = DF.loc[:,i].as_matrix()
    v1_dist = squareform(pdist(v1[:, np.newaxis]))
    return (i, dcor.double_centered(v1_dist))

dict_centered_matrices = dict(map(compute_matrix, col_names))

for i in col_names:
    v1_centered = dict_centered_matrices[i]
    for j in col_names[k:]:
        v2_centered = dict_centered_matrices[j]
        rez = np.sqrt(
            dcor.average_product(v1_centered, v2_centered) / np.sqrt(
                dcor.average_product(v1_centered, v1_centered) *
                dcor.average_product(v2_centered, v2_centered)))
        DCOR_REZ.at[i,j] = rez
        DCOR_REZ.at[j,i] = rez
    k = k + 1
print(DCOR_REZ)
This should make your code faster, at the expense of consuming more memory. I will consider adding convenience functions for this case, as it seems a common one. You can also try parallelizing the code using the multiprocessing module, and replacing the map function with the map method of a Pool instance.
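Here is a minimal sketch of that parallel variant. It assumes compute_matrix and col_names are defined exactly as above, and that the platform forks worker processes (e.g. Linux) so DF is visible inside the workers:

from multiprocessing import Pool

if __name__ == '__main__':
    # compute the double-centered distance matrices in parallel, one task per column
    with Pool(processes=4) as pool:  # adjust to the number of available cores
        dict_centered_matrices = dict(pool.map(compute_matrix, col_names))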
Since dcor version 0.5 I have added a rowwise method with this explicit purpose in mind. It will parallelize the computation using available cores when the right conditions are met (basically, when the distance covariance/correlation is computed between random variables and not random vectors, by default). Sorry for the delay in implementing this.
I am working with a code in Pandas that involves reading a lot of files and then performing various operations on each file inside a loop (which iterates over a file list).
I am trying to convert this to a Dask-based approach instead of a Pandas-based approach and have the following attempt so far. I am new to Dask and would like to ask whether this is a reasonable approach.
Here is what the input data looks like:
A X1 X2 X3 A_d S_d
0 1.0 0.475220 0.839753 0.872468 1 1
1 2.0 0.318410 0.940817 0.526758 2 2
2 3.0 0.053959 0.056407 0.169253 3 3
3 4.0 0.900777 0.307995 0.689259 4 4
4 5.0 0.670465 0.939116 0.037865 5 5
Here is the code:
import dask.dataframe as dd
import numpy as np; import pandas as pd
def my_func(df, r): # perform representative calculations
    q = df.columns.tolist()
    df2 = df.loc[:, q[1:]] / df.loc[:, q[1:]].sum()
    df2['A'] = df['A']
    df2 = df2[(df2['A'] >= r[0]) & (df2['A'] <= r[1])]
    c = q[1:-2]
    A = df2.loc[:, c].sum()
    tx = df2.loc[:, c].min() * df2.loc[:, c].max()
    return A - tx

list_1 = []
for j in range(1, 13):
    df = dd.read_csv('Test_file.csv')
    N = my_func(df, [751.7, 790.4]) # perform calculations
    out = ['X'+str(j)+'_2', df['A'].min()] + N.compute().tolist()
    list_1.append(out)
df_f = pd.DataFrame(list_1)
my_func returns a Dask Series N. Currently, I must .compute() the Dask Series before I can convert it into a list. I am having trouble overcoming this.
Is it possible to vertically append N (which is a Dask Series) as a row to a blank Dask DataFrame? E.g., in pandas I tend to do this: df_N = pd.DataFrame() would go outside the for loop, and then something like df_N = pd.concat([df_N, N], axis=0). This would allow a Dask DataFrame to be built up inside the for loop. After that (outside the loop), I could easily just horizontally concatenate the built-up Dask DataFrame to pd.DataFrame(list_1).
Another approach is to create a single-row Dask DataFrame from the Dask Series N, then vertically concatenate this single-row DataFrame to a blank Dask DataFrame (created outside the loop). Is it possible in Dask to create a single-row Dask DataFrame from a Series?
Additional Information (if needed):
In my real code, I am reading from a *.csv file inside a loop. For this reason, when I generated a sample dataset, I wrote it to a *.csv file in order to use dd.read_csv() inside the loop.
df2['A'] = df['A'] - this line is needed since the line above it omits column A (during the normalization of each column by its sum) and produces a new DataFrame. df2['A'] = df['A'] adds column A back to the new DataFrame.
Is it possible to vertically append N (which is a Dask Series) as a row to a blank Dask DF? eg. in Pandas, I tend to do this: df_N = pd.DataFrame() would go outside the for loop and then something like df_N = pd.concat([df_N,N],axis=0). This would allow a Dask DF to be built up in the for loop. After that (outside the loop), I could easily just horizontally concatenate the built-up Dask DF to pd.DataFrame(list_1).
You should never append rows to either a Pandas dataframe or a Dask dataframe. This is very inefficient. Instead it is better to collect many pandas/dask dataframes together and then call the pd.concat or dd.concat function.
Also I note that you are calling compute within your for loop. It is recommended to call compute only after you have set up your entire computation if possible. Otherwise you are probably not getting much parallelism.
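Here is a minimal sketch of that pattern, reusing my_func and Test_file.csv from the question: keep everything lazy inside the loop and trigger a single compute at the end.

import dask
import dask.dataframe as dd
import pandas as pd

lazy_results = []
for j in range(1, 13):
    df = dd.read_csv('Test_file.csv')
    N = my_func(df, [751.7, 790.4])                      # still lazy, no compute here
    lazy_results.append(('X' + str(j) + '_2', df['A'].min(), N))

# a single call evaluates every Dask object in the nested structure
computed = dask.compute(*lazy_results)
df_f = pd.DataFrame([[name, a_min] + series.tolist()
                     for name, a_min, series in computed])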
Note: I haven't actually gone through the trouble of understanding your code. I'm just responding to the questions at the end. Hopefully someone else comes along with a more comprehensive answer.
I have been experimenting with sklearn's Tfidfvectorizer.
I am only concerned with TF, not IDF, so my settings have use_idf=False.
Complete settings are:
vectorizer = TfidfVectorizer(max_df=0.5, max_features= n_features,
ngram_range=(1,3), use_idf=False)
I have been trying to replicate the output of .fit_transform but haven't managed to do it so far and was hoping someone could explain the calculations for me.
My toy example is:
document = ["one two three one four five",
"two six eight ten two"]
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
n_features = 5
vectorizer = TfidfVectorizer(max_df=0.5, max_features= n_features,
ngram_range=(1,3), use_idf=False)
X = vectorizer.fit_transform(document)
count = CountVectorizer(max_df=0.5, max_features= n_features,
ngram_range=(1,3))
countMat = count.fit_transform(document)
I have assumed the counts from the CountVectorizer will be the same as the counts used in the TfidfVectorizer, so I am trying to transform the countMat object to match X.
I had missed a line from the documentation which says
Each row is normalized to have unit euclidean norm
So to answer my own question - the answer is:
import numpy as np

rows = countMat.toarray()
for i in range(rows.shape[0]):
    row = rows[i]
    print(row / np.sqrt(np.sum(row**2)))  # each row divided by its euclidean norm
Although I am sure there is a more elegant way to code the result.
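A more compact alternative (a sketch, assuming both vectorizers select the same features with these settings): sklearn's normalize applies exactly this row-wise euclidean (L2) normalization, so the count matrix can be converted in one call and compared against X.

import numpy as np
from sklearn.preprocessing import normalize

tf = normalize(countMat, norm='l2')            # L2-normalize each row of the counts
print(np.allclose(tf.toarray(), X.toarray()))  # should print True with identical settings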
I am trying to calculate the perplexity for the data I have. The code I am using is:
import sys
sys.path.append("/usr/local/anaconda/lib/python2.7/site-packages/nltk")
from nltk.corpus import brown
from nltk.model import NgramModel
from nltk.probability import LidstoneProbDist, WittenBellProbDist
estimator = lambda fdist, bins: LidstoneProbDist(fdist, 0.2)
lm = NgramModel(3, brown.words(categories='news'), True, False, estimator)
print lm
But I am receiving the error,
File "/usr/local/anaconda/lib/python2.7/site-packages/nltk/model/ngram.py", line 107, in __init__
cfd[context][token] += 1
TypeError: 'int' object has no attribute '__getitem__'
I have already performed Latent Dirichlet Allocation for the data I have and I have generated the unigrams and their respective probabilities (they are normalized as the sum of total probabilities of the data is 1).
My unigrams and their probability looks like:
Negroponte 1.22948976891e-05
Andreas 7.11290670484e-07
Rheinberg 7.08255885794e-07
Joji 4.48481435106e-07
Helguson 1.89936727391e-07
CAPTION_spot 2.37395965468e-06
Mortimer 1.48540253778e-07
yellow 1.26582575863e-05
Sugar 1.49563800878e-06
four 0.000207196011781
This is just a fragment of the unigrams file I have; the same format continues for thousands of lines. The total probabilities (second column) summed give 1.
I am a budding programmer. This ngram.py belongs to the nltk package and I am confused as to how to rectify this. The sample code I have here is from the nltk documentation and I don't know what to do now. Please help on what I can do. Thanks in advance!
Perplexity is the inverse probability of the test set, normalized by the number of words. In the case of unigrams:
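For a test set $W = w_1 w_2 \dots w_N$, this is the standard formula, which the code below implements directly:

$$\mathrm{PP}(W) = \left( \prod_{i=1}^{N} \frac{1}{P(w_i)} \right)^{1/N}$$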
Now you say you have already constructed the unigram model, meaning, for each word you have the relevant probability. Then you only need to apply the formula. I assume you have a big dictionary unigram[word] that would provide the probability of each word in the corpus. You also need to have a test set. If your unigram model is not in the form of a dictionary, tell me what data structure you have used, so I could adapt it to my solution accordingly.
perplexity = 1
N = 0
for word in testset:
    if word in unigram:
        N += 1
        perplexity = perplexity * (1/unigram[word])
perplexity = pow(perplexity, 1/float(N))
UPDATE:
As you asked for a complete working example, here's a very simple one.
Suppose this is our corpus:
corpus ="""
Monty Python (sometimes known as The Pythons) were a British surreal comedy group who created the sketch comedy show Monty Python's Flying Circus,
that first aired on the BBC on October 5, 1969. Forty-five episodes were made over four series. The Python phenomenon developed from the television series
into something larger in scope and impact, spawning touring stage shows, films, numerous albums, several books, and a stage musical.
The group's influence on comedy has been compared to The Beatles' influence on music."""
Here's how we construct the unigram model first:
import collections, nltk
# we first tokenize the text corpus
tokens = nltk.word_tokenize(corpus)
#here you construct the unigram language model
def unigram(tokens):
    model = collections.defaultdict(lambda: 0.01)
    for f in tokens:
        try:
            model[f] += 1
        except KeyError:
            model[f] = 1
            continue
    N = float(sum(model.values()))
    for word in model:
        model[word] = model[word]/N
    return model
Our model here is smoothed. For words outside the scope of its knowledge, it assigns a low probability of 0.01. I already told you how to compute perplexity:
#computes perplexity of the unigram model on a testset
def perplexity(testset, model):
    testset = testset.split()
    perplexity = 1
    N = 0
    for word in testset:
        N += 1
        perplexity = perplexity * (1/model[word])
    perplexity = pow(perplexity, 1/float(N))
    return perplexity
Now we can test this on two different test sets:
testset1 = "Monty"
testset2 = "abracadabra gobbledygook rubbish"
model = unigram(tokens)
print perplexity(testset1, model)
print perplexity(testset2, model)
for which you get the following result:
>>>
49.09452736318415
99.99999999999997
Note that when dealing with perplexity, we try to reduce it. A language model with lower perplexity on a given test set is more desirable than one with higher perplexity. In the first test set, the word Monty was included in the unigram model, so the corresponding perplexity number is also smaller.
Thanks for the code snippet! Shouldn't:
for word in model:
    model[word] = model[word]/float(sum(model.values()))
be rather:
v = float(sum(model.values()))
for word in model:
    model[word] = model[word]/v
Oh ... I see it was already answered ...
I have a dataset which the dimension is around 2,000 (rows) x 120,000 (columns).
And I'd like to select certain columns (~8,000 columns).
So the file dimension would be 2,000 (rows) x 8,000 (columns).
Here is the code written by a helpful person (I found it on Stack Overflow, but I am sorry I have forgotten his name).
import pandas as pd
df = pd.read_csv('...mydata.csv')
my_query = pd.read_csv('...myquery.csv')
df[my_query['Name'].unique()].to_csv('output.csv')
However, running it raises a MemoryError in my console, which means the code does not work well for data of this size.
So does anyone know how to improve the code with a more efficient way to select these columns?
I think I found your source.
So, my solution uses read_csv with the arguments:
iterator=True - if True, return a TextFileReader to enable reading a file into memory piece by piece
chunksize=1000 - the number of rows used to "chunk" the file into pieces; causes a TextFileReader object to be returned
usecols=subset - a subset of columns to return, results in much faster parsing time and lower memory usage
Source.
I filter the large dataset with usecols - I read only a (2,000, 8,000) dataset instead of (2,000, 120,000).
import pandas as pd
#read subset from csv and remove duplicate indices
subset = pd.read_csv('8kx1.csv', index_col=[0]).index.unique()
print subset
#use subset as filter of columns
tp = pd.read_csv('input.csv',iterator=True, chunksize=1000, usecols=subset)
df = pd.concat(tp, ignore_index=True)
print df.head()
print df.shape
#write to csv
df.to_csv('output.csv', chunksize=1000)
I use this snippet for testing:
import pandas as pd
import io
temp=u"""A,B,C,D,E,F,G
1,2,3,4,5,6,7"""
temp1=u"""Name
B
B
C
B
C
C
E
F"""
subset = pd.read_csv(io.StringIO(temp1), index_col=[0]).index.unique()
print subset
#use subset as filter of columns
df = pd.read_csv(io.StringIO(temp), usecols=subset)
print df.head()
print df.shape