Really struggling with this one... Forgive the longish post.
I have an experiment that on each trial displays some stimulus, collects a response, and then moves on to the next trial.
I would like to incorporate an optimizer that runs in between trials and therefore must have a specific time-window designated by me to run, or it should be terminated. If it's terminated, I would like to return the last set of parameters it tried so that I can use it later.
Generally speaking, here's the order of events I'd like to happen:
In between trials:
Display stimulus ("+") for some number of seconds.
While this is happening, run the optimizer.
If the time for displaying the "+" has elapsed and the optimizer has
not finished, terminate the optimizer, return the most recent set of parameters it tried, and move on.
Here is some of the relevant code I'm working with so far:
do_bns() is the objective function. In it I include NLL['par'] = par or q.put(par)
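For reference, a minimal stand-in for do_bns (hypothetical; the real one computes the negative log-likelihood from the trial data) could look like:
def do_bns(par, NLL, q):
    # placeholder objective; the real one evaluates the model's NLL for `par`
    val = sum((p - 0.3) ** 2 for p in par)
    NLL['par'] = list(par)   # record the last tried parameters in the shared dict
    q.put(list(par))         # and/or publish them to the parent via the queue
    return val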
from scipy.optimize import minimize
from multiprocessing import Process, Manager, Queue
from psychopy import core #for clock, and other functionality
clock = core.Clock()
def optim(par, NLL, q):
    a = minimize(do_bns, (par), method='L-BFGS-B', args=(NLL, q),
                 bounds=[(0.2, 1.5), (0.01, 0.8), (0.001, 0.3), (0.1, 0.4), (0.1, 1), (0.001, 0.1)],
                 options={"disp": False, 'maxiter': 1, 'maxfun': 1, "eps": 0.0002}, tol=0.00002)
if __name__ == '__main__':
    print('starting optim')
    max_time = 1.57
    with Manager() as manager:
        par = manager.list([1, 0.1, 0.1, 0.1, 0.1, 0.1])
        NLL = manager.dict()
        q = Queue()
        p = Process(target=optim, args=(par, NLL, q))
        p.start()
        start = clock.getTime()
        while clock.getTime() - start < max_time:
            p.join(timeout=0)
            if not p.is_alive():
                break
        if p.is_alive():
            res = q.get()
            p.terminate()
            stop = clock.getTime()
            print(NLL['a'])
            print('killed after: ' + str(stop - start))
        else:
            res = q.get()
            stop = clock.getTime()
            print('terminated successfully after: ' + str(stop - start))
        print(NLL)
        print(res)
This code, on its own, seems to mostly do what I want. One caveat: the res = q.get() right above p.terminate() actually takes something like 200 ms, so it will not terminate exactly at max_time if max_time < ~1.5 s.
If I wrap this code in a while-loop that checks to see if it's time to stop presenting the stimulus:
stim_start = clock.getTime()
stim_end = 5
print('showing stim')
textStim.setAutoDraw(True)
win.flip()
while clock.getTime() - stim_start < stim_end:
    # insert the code above
print('out of loop')
I get weird behavior such as multiple iterations of the whole code from the beginning...
showing stim
starting optim
showing stim
out of loop
showing stim
out of loop
[1.0, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001]
killed after: 2.81303440395
Note the multiple 'showing stim' and 'out of loop' lines.
I'm open to any solution that accomplishes my goal :|
Help and thank you!
Ben
General remark
Your solution would give me nightmares! I don't see a reason to use multiprocessing here, and I'm not even sure how you grab those updated results before termination. Maybe you have your reasons for this approach, but I highly recommend something else (which has a limitation).
Callback-based approach
The general idea I would pursue is the following:
fire up your optimizer with some additional time-limit information and a callback enforcing it
the callback is called in each iteration of the optimizer
if the time limit is reached: raise a customized exception
The limits:
as the callback is only called once per iteration, there is only a limited set of points in time at which the optimizer can be stopped
the potential overshoot depends heavily on the per-iteration time of your problem (numerical differentiation, huge data, slow function evaluations; all of this matters)
if never exceeding the given time has the highest priority, this approach might not be right, or you would need some kind of safeguarded interpolation to decide whether one more iteration is possible in time
or: combine your kind of killing off workers with my approach of publishing intermediate results through a callback (a rough sketch of this combination is given at the end of this answer)
Example code (a bit hacky):
import time
import numpy as np
import scipy.sparse as sp
import scipy.optimize as opt
np.random.seed(1)
""" Fake task: sparse NNLS """
M, N, D = 2500, 2500, 0.1
A = sp.random(M, N, D)
b = np.random.random(size=M)
""" Optimization-setup """
class TimeOut(Exception):
    """Raise for my specific kind of exception"""

def opt_func(x, A, b):
    return 0.5 * np.linalg.norm(A.dot(x) - b)**2

def opt_grad(x, A, b):
    Ax = A.dot(x) - b
    grad = A.T.dot(Ax)
    return grad
def callback(x):
    time_now = time.time()           # probably not the right tool in general!
    callback.result[0] = np.copy(x)  # mutate the shared list in place (better safe than sorry -> copy)
    if time_now - callback.time_start >= callback.time_max:
        raise TimeOut("Time out")
def optimize(x0, A, b, time_max):
    result = [np.copy(x0)]  # hack: mutable type
    time_start = time.time()
    try:
        """ Add additional info to callback (only takes x as param!) """
        callback.time_start = time_start
        callback.time_max = time_max
        callback.result = result
        res = opt.minimize(opt_func, x0, jac=opt_grad,
                           bounds=[(0, np.inf) for i in range(len(x0))],  # NNLS
                           args=(A, b), callback=callback, options={'disp': False})
    except TimeOut:
        print('time out')
        return result[0], opt_func(result[0], A, b)
    return res.x, res.fun
print('experiment 1')
start_time = time.perf_counter()
x, res = optimize(np.zeros(len(b)), A, b, 0.1) # 0.1 seconds max!
end_time = time.perf_counter()
print(res)
print('used secs: ', end_time - start_time)
print('experiment 2')
start_time = time.perf_counter()
x_, res_ = optimize(np.zeros(len(b)), A, b, 5) # 5 seconds max!
end_time = time.perf_counter()
print(res_)
print('used secs: ', end_time - start_time)
Example output:
experiment 1
time out
422.392771467
used secs: 0.10226839151517493
experiment 2
72.8470708728
used secs: 0.3943936788825996
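If a hard deadline really matters, that last point under "The limits" can be combined with your process-killing approach: let the child publish every intermediate point through a queue and let the parent keep the newest one before terminating. A rough, untested sketch (do_bns here is only a stand-in for your real objective):
import time
from multiprocessing import Process, Queue
import numpy as np
from scipy.optimize import minimize

def do_bns(par):
    # stand-in objective; replace with the real NLL computation
    return float(np.sum((np.asarray(par) - 0.3) ** 2))

def worker(x0, q):
    # publish every intermediate point; the parent keeps only the newest one
    def cb(xk):
        q.put(np.copy(xk))
    res = minimize(do_bns, x0, method='L-BFGS-B', callback=cb)
    q.put(np.copy(res.x))            # final answer, if we get this far

if __name__ == '__main__':
    q = Queue()
    x0 = np.array([1, 0.1, 0.1, 0.1, 0.1, 0.1])
    p = Process(target=worker, args=(x0, q))
    p.start()

    deadline = time.time() + 1.5     # e.g. the stimulus duration
    latest = x0
    while time.time() < deadline and p.is_alive():
        while not q.empty():         # drain: keep the most recent point
            latest = q.get()
        time.sleep(0.005)

    if p.is_alive():
        p.terminate()                # hard stop; `latest` is the best we saw
    p.join()
    print('last parameters tried: ' + str(latest))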
Related
This paper describes Pyomo's Differential and Algebraic Equations framework. It also mentions multi-stage problems; however, it does not show a complete example of such a problem. Does such an example exist somewhere?
The following demonstrates a complete minimum working example of a multi-stage optimization problem using Pyomo's DAE system:
#!/usr/bin/env python3
#http://www.gpops2.com/Examples/OrbitRaising.html
from pyomo.environ import *
from pyomo.dae import *
from pyomo.opt import SolverStatus, TerminationCondition
import random
import matplotlib.pyplot as plt
T = 10 #Maximum time for each stage of the model
STAGES = 3 #Number of stages
m = ConcreteModel() #Model
m.t = ContinuousSet(bounds=(0,T)) #Time variable
m.stages = RangeSet(0, STAGES) #Stages in the range [0,STAGES]. Can be thought of as an integer-valued set
m.a = Var(m.stages, m.t) #State variable defined for all stages and times
m.da = DerivativeVar(m.a, wrt=m.t) #First derivative of state variable with respect to time
m.u = Var(m.stages, m.t, bounds=(0,1)) #Control variable defined for all stages and times. Bounded to range [0,1]
#Setting the value of the derivative.
def eq_da(m,stage,t): #m argument supplied when function is called. `stage` and `t` are given values from m.stages and m.t (see below)
    return m.da[stage,t] == m.u[stage,t] #Derivative is proportional to the control variable
m.eq_da = Constraint(m.stages, m.t, rule=eq_da) #Call constraint function eq_da for each unique value of m.stages and m.t
#We need to connect the different stages together...
def eq_stage_continuity(m,stage):
    if stage==m.stages.last(): #The last stage doesn't connect to anything
        return Constraint.Skip #So skip this constraint
    else:
        return m.a[stage,T]==m.a[stage+1,0] #Final time of each stage connects with the initial time of the following stage
m.eq_stage_continuity = Constraint(m.stages, rule=eq_stage_continuity)
#Boundary conditions
def _init(m):
    yield m.a[0,0] == 0 #Initial value (at zeroth stage and zeroth time) of `a` is 0
    yield ConstraintList.End
m.con_boundary = ConstraintList(rule=_init) #Repeatedly call `_init` until `ConstraintList.End` is returned
#Objective function: maximize `a` at the end of the final stage
m.obj = Objective(expr=m.a[STAGES,T], sense=maximize)
#Get a discretizer
discretizer = TransformationFactory('dae.collocation')
#Discretize the model
#nfe (number of finite elements)
#ncp (number of collocation points within finite element)
discretizer.apply_to(m,nfe=30,ncp=6,scheme='LAGRANGE-RADAU')
#Get a solver
solver = SolverFactory('ipopt', keepfiles=True, log_file='/z/log', soln_file='/z/sol')
solver.options['max_iter'] = 100000
solver.options['print_level'] = 1
solver.options['linear_solver'] = 'ma27'
solver.options['halt_on_ampl_error'] = 'yes'
#Solve the model
results = solver.solve(m, tee=True)
print(results.solver.status)
print(results.solver.termination_condition)
#Retrieve the results in a pleasant format
r_t = [t for s in sorted(m.stages) for t in sorted(m.t)]
r_a = [value(m.a[s,t]) for s in sorted(m.stages) for t in sorted(m.t)]
r_u = [value(m.u[s,t]) for s in sorted(m.stages) for t in sorted(m.t)]
plt.plot(r_t, r_a, label="r_a")
plt.plot(r_t, r_u, label="r_u")
plt.legend()
plt.show()
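As an optional sanity check on the stage continuity constraints (assuming the solve above succeeded), the stage boundary values can be compared directly:
#End value of each stage should equal the start value of the following stage
for s in range(STAGES):
    print(s, value(m.a[s,T]), value(m.a[s+1,0]))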
I have a list of lists like so:
import numpy as np
import random
import time
import itertools
N = 1000
x = np.random.random((N,N))
y = np.zeros((N,N))
z = np.random.random((N,N))
list_of_lists = [[x, y], [y,z], [z,x]]
and for each sublist I want to calculate the number of non-zeros, the mean, and the standard deviation.
I have done that like so:
distribution = []
alb_mean = []
alb_std = []
start = time.time()
for i in range(len(list_of_lists)):
    one_mean = []
    non_zero_l = []
    one_list = list_of_lists[i]
    for n in one_list:
        #count non_zeros
        non_zero_count = np.count_nonzero(n)
        non_zero_l.append(non_zero_count)
        #assign nans
        n = n.astype(float)
        n[n == 0.0] = np.nan
        #flatten the matrix
        n = np.array(n.flatten())
        one_mean.append(n)
    #append means and stds
    distribution.append(sum(non_zero_l))
    alb_mean.append(np.nanmean(one_mean))
    alb_std.append(np.nanstd(one_mean))
end = time.time()
print "Loop took {} seconds".format((end - start))
which takes 0.23 seconds.
I tried to make this faster with a second option:
distribution = []
alb_mean = []
alb_std = []
start = time.time()
for i in range(len(list_of_lists)):
    for_mean = []
    #get one list
    one_list = list_of_lists[i]
    #flatten the list
    chain = itertools.chain(*one_list)
    flat = list(chain)
    #count non_zeros
    non_zero_count = np.count_nonzero(flat)
    distribution.append(non_zero_count)
    #remove zeros
    remove_zero = np.setdiff1d(flat, [0.0])
    alb_mean.append(np.nanmean(remove_zero))
    alb_std.append(np.nanstd(remove_zero))
end = time.time()
print "Loop took {} seconds".format((end - start))
which is actually slower and takes 0.88 seconds.
The sheer number of loops has me thinking there is a better way to do this. I have tried numba, but it doesn't seem to like appending in a function.
Version #1
Well, with the loopy solution in your sample you are looping with two loops: one with 3 iterations and another with 2 iterations. So it's already close to being vectorized; the only bottlenecks are the append steps.
Going fully vectorized, here's one approach -
a = np.array(list_of_lists, dtype=float)   # shape (3, 2, N, N)
zm = a!=0                                  # mask of non-zero entries
avgs = np.einsum('ijkl,ijkl->i',zm,a)/zm.sum(axis=(1,2,3)).astype(float)  # mean of non-zeros per sublist
a[~zm] = np.nan                            # blank out zeros so nanstd ignores them
stds = np.nanstd(a, axis=(1,2,3))          # std of non-zeros per sublist
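As a side note, the per-sublist non-zero counts from the question (distribution) fall out of the same mask, so no extra pass is needed -
counts = zm.sum(axis=(1,2,3))   # one non-zero count per sublist, same as `distribution`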
Using the same setup as in the question, here's what I get on timings -
Loop took 0.150925159454 seconds
Proposed solution took 0.121352910995 seconds
Version #2
We could compute the standard deviation from the mean, re-using avgs for a further boost.
A modified version would be -
a = np.asarray(list_of_lists)
zm = a!=0
N = zm.sum(axis=(1,2,3)).astype(float)
avgs = np.einsum('ijkl,ijkl->i',zm,a)/N
diffs = ((a-avgs[:,None,None,None])**2)
stds = np.sqrt(np.einsum('ijkl,ijkl->i',zm,diffs)/N)
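To double-check against the NaN-based route from Version #1, a quick optional verification would be -
a_nan = np.array(list_of_lists, dtype=float)
a_nan[a_nan == 0] = np.nan
assert np.allclose(stds, np.nanstd(a_nan, axis=(1,2,3)))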
Updated timings -
Loop took 0.155035018921 seconds
Proposed solution took 0.0648851394653 seconds
I need to calculate more than one accuracy at the same time, concurrently.
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
The piece of code is the same as the MNIST example in the TensorFlow tutorial, but instead of having:
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
I have two placeholders instead, because I have already calculated and stored the values:
W = tf.placeholder(tf.float32, [784, 10])
b = tf.placeholder(tf.float32, [10])
I want to fill the network with the values I already have and then calculate the accuracy, and this has to happen for each network I load.
So if I load 20 networks, I want to calculate the accuracy of each one in parallel. Is there a way with sess.run to execute the same operation with different inputs?
You have multiple options to make things happen in parallel:
Parallelize using multiple python threads / subprocesses. (See Python's "multiprocessing" library.)
Batch up the operations into single larger operations. (e.g. Similar to the image operations that operate on a batch of images simultaneously https://www.tensorflow.org/api_docs/python/image/resizing#resize_bilinear.)
Make a single graph that has the 20 network accuracy calculations.
I think the last one is the easiest, so I've included a bit of sample code below to get you started:
import tensorflow as tf
def construct_accuracy_calculation(i):
    W = tf.placeholder(tf.float32, [784, 10], name=("%d_W" % i))
    b = tf.placeholder(tf.float32, [10], name=("%d_b" % i))
    # ...
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return (W, b, accuracy)

def main():
    accuracy_computations = []
    feed_dict = {}
    for i in xrange(NUM_NETWORKS):
        (W, b) = load_network(i)
        (W_op, b_op, accuracy) = construct_accuracy_calculation(i)
        feed_dict[W_op] = W
        feed_dict[b_op] = b
        accuracy_computations.append(accuracy)
    # sess = ...
    accuracy_values = sess.run(accuracy_computations, feed_dict=feed_dict)

if __name__ == "__main__":
    main()
One approach to parallelizing TF computations is to execute run calls in parallel using threads (TF is incompatible with multiprocessing). It's a bit more complicated than the other approaches because you have to handle parallelism yourself on the Python side.
Here's an example that runs the same matmul op in the same session in different Python threads with different fed inputs; it runs about 4x faster with 4 threads compared to 1 thread.
import os, sys, queue, threading, time
import tensorflow as tf
import numpy as np
def p(s):
    # helper function for printing from multiple threads
    # need to append \n or results get intermixed in notebook
    print(s+"\n", flush=True, end="")
num_threads = 4
data_size = 32 # number of data points to enqueue
work_per_thread = data_size/num_threads
timeout = 10 # grace period for dequeing
input_queue = queue.Queue(data_size)
output_queue = queue.Queue(data_size)
dtype = np.float32
# use matrix vector matmul since it's compute intensive and uses single core
# see issue #6752
n = 16*1024
with tf.device("/cpu:0"):
    x = tf.placeholder(dtype)
    matrix = tf.Variable(tf.ones((n, n)))
    vector = tf.Variable(tf.ones((n, 1)))
    y = tf.matmul(matrix, vector)[0, 0] + x
# turn off graph-rewriting optimizations
sess = tf.Session(config=tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0))))
sess.run(tf.global_variables_initializer())
done = False
def runner(runner_id):
    p("Starting runner %s" % (runner_id,))
    count = 0
    while not done:
        try:
            x_val = input_queue.get(timeout=1)
        except queue.Empty:
            # retry on empty queue
            continue
        p("Start computing %d on %d" % (x_val, runner_id))
        out = sess.run(y, {x: x_val})
        count += 1
        output_queue.put(out)
        if count >= work_per_thread:
            break
    else:
        p("Stopping runner " + str(runner_id))
threads = []
print("Creating threads.")
for i in range(num_threads):
    t = threading.Thread(target=runner, args=(i,))
    threads.append(t)
for i in range(data_size):
    input_queue.put(i, timeout=timeout)
# start threads
p("Launching runners.")
start_time = time.time()
for t in threads:
    t.start()
p("Reading results.")
for i in range(data_size):
    try:
        p("Main thread: obtained %.2f" % (output_queue.get(timeout=timeout),))
    except queue.Empty:
        print("No results after %d, terminating computation." % (timeout,))
        break
else:
    p("Computed successfully.")
done = True
p("Waiting for threads to finish.")
for t in threads:
    t.join()
print("Done in %.2f seconds" %(time.time() - start_time))
I need to reduce the running time of quad() in Python (I am integrating thousands of integrals). I found a similar question here in which it was suggested to do several integrations and add the partial values. However, that does not improve performance. Any thoughts? Here is a simple example:
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm
import time
funcB = lambda x: norm.pdf(x,0,1)
start = time.time()
good_missclasified,_ = quad(funcB, 0,3.3333)
stop = time.time()
time_elapsed = stop - start
print ('quad : ' + str(time_elapsed))
start = time.time()
num = np.linspace(0,3.3333,10)
Lv = []
last, lastG = 0, 0
for g in num:
    Lval, x = quad(funcB, lastG, g)
    last, lastG = last + Lval, g
    Lv.append(last)
Lv = np.array(Lv)
stop = time.time()
time_elapsed = stop - start
print ('10 int : ' + str(time_elapsed))
print(good_missclasified,Lv[9])
quadpy (a project of mine) is vectorized and can integrate a function over many domains (e.g., intervals) at once. You do have to choose your own integration method though.
import numpy
import quadpy
a = 0.0
b = 1.0
n = 100
start_points = numpy.linspace(a, b, n, endpoint=False)
h = (b-a) / n
end_points = start_points + h
intervals = numpy.array([start_points, end_points])
scheme = quadpy.line_segment.gauss_kronrod(3)
vals = scheme.integrate(numpy.exp, intervals)
print(vals)
[0.10050167 0.10151173 0.10253194 0.1035624 0.10460322 0.1056545
0.10671635 0.10778886 0.10887216 0.10996634 0.11107152 0.11218781
0.11331532 0.11445416 0.11560444 0.11676628 0.1179398 0.11912512
0.12032235 0.12153161 0.12275302 0.12398671 0.12523279 0.1264914
0.12776266 0.1290467 0.13034364 0.13165362 0.13297676 0.1343132
0.13566307 0.1370265 0.13840364 0.13979462 0.14119958 0.14261866
0.144052 0.14549975 0.14696204 0.14843904 0.14993087 0.15143771
0.15295968 0.15449695 0.15604967 0.157618 0.15920208 0.16080209
0.16241818 0.16405051 0.16569924 0.16736455 0.16904659 0.17074554
0.17246156 0.17419482 0.17594551 0.17771379 0.17949985 0.18130385
0.18312598 0.18496643 0.18682537 0.188703 0.1905995 0.19251505
0.19444986 0.19640412 0.19837801 0.20037174 0.20238551 0.20441952
0.20647397 0.20854907 0.21064502 0.21276204 0.21490033 0.21706012
0.21924161 0.22144502 0.22367058 0.22591851 0.22818903 0.23048237
0.23279875 0.23513842 0.2375016 0.23988853 0.24229945 0.2447346
0.24719422 0.24967857 0.25218788 0.25472241 0.25728241 0.25986814
0.26247986 0.26511783 0.2677823 0.27047356]
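Applied to the integrand from the question, the same pattern would look something like this (a sketch, assuming the same quadpy API as above):
import numpy
import quadpy
from scipy.stats import norm

pts = numpy.linspace(0.0, 3.3333, 10)
intervals = numpy.array([pts[:-1], pts[1:]])   # 9 subintervals
scheme = quadpy.line_segment.gauss_kronrod(3)
partials = scheme.integrate(lambda t: norm.pdf(t, 0, 1), intervals)
print(partials.sum())   # should be close to quad(funcB, 0, 3.3333)[0]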
I'm writing Python code to get the percentage of each byte value contained in a file, then check whether each percentage exceeds a given limit and display the byte value (as hex) plus its percentage if it does.
My code works, but it is very time consuming: it takes approximately 1 minute for a 190 KB file.
import time
def string2bytes(data):
    return "".join("{:02x}".format(ord(c)) for c in data)
startTime = time.time()
# get datas from file
f = open("myfile.bin","rb")
filedata = f.read()
size = f.tell()
f.close()
# count each byte, check its percentage, and store it in a dictionary if over 0.50%
ChkResult = True
r = {}
for data in filedata:
    c = float(filedata.count(data)) / size * 100
    if c > 0.50:
        ChkResult = False
        tag = string2bytes(data).upper()
        r[tag] = c
# print result
if ChkResult:
    print "OK"
else:
    print "DANGER!"
    print "Some bytes exceed 0.50%."
    for x in sorted(r.keys()):
        print " 0x%s is %.2f%%" % ((x), r[x])
print "Done in %.2f seconds."%(time.time() - startTime)
Do you have any ideas for reducing this time while keeping the same result? I'm staying with Python 2.7.x (for many reasons).
Many thanks.
Use collections.Counter (see the docs) to prevent O(n^2) time:
You are calling count n times. count is O(n).
import time
from collections import Counter
def string2bytes(data):
    return "".join("{:02x}".format(ord(c)) for c in data)
startTime = time.time()
# get datas from file
f = open("myfile.bin","rb")
filedata = f.read()
size = f.tell()
f.close()
# count each byte, check its percentage, and store it in a dictionary if over 0.50%
ChkResult = True
r = {}
for k, v in Counter(filedata).items():
    c = float(v) / size * 100
    if c > 0.50:
        ChkResult = False
        tag = string2bytes(k).upper()
        r[tag] = c
# print result
if ChkResult:
    print "OK"
else:
    for x in sorted(r.keys()):
        print " 0x%s is %.2f%%" % ((x), r[x])
print "Done in %.2f seconds."%(time.time() - startTime)
or slightly more succinctly (comparing raw counts against threshold = size * 0.005, i.e. 0.5% of the file size, saves a division per byte value):
import time
from collections import Counter
def fmt(data):
    return "".join("{:02x}".format(ord(c)) for c in data).upper()

def pct(v, size):
    return float(v) / size * 100
startTime = time.time()
with open("myfile.bin","rb") as f:
    counts = Counter(f.read())
    size = f.tell()
threshold = size * 0.005
err = {fmt(k): pct(v, size) for k, v in counts.items() if v > threshold}
if not err:
    print "OK"
else:
    for k, v in sorted(err.items()):
        print " 0x{} is {:.2f}%".format(k, v)
print "Done in %.2f seconds." % (time.time() - startTime)
If there is a need for speed:
I was curious, so I tried a homespun version of Counter. I actually thought it would not be faster, but I am getting better performance than collections.Counter.
import collections
def counter(s):
    '''counts the (hashable) things in s
    returns a collections.defaultdict -> {thing:count}
    '''
    a = collections.defaultdict(int)
    for c in s:
        a[c] += 1
    return a
This could be substituted into @DTing's solution; I wouldn't change any of the rest.
Guess it wasn't homespun at all; it is listed in the defaultdict examples in the docs.
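For completeness, plugging it into the first snippet above would just be (same variables as there; counter(filedata) replaces Counter(filedata)):
r = {}
for k, v in counter(filedata).items():
    c = float(v) / size * 100
    if c > 0.50:
        r[string2bytes(k).upper()] = c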