I am trying to get the fill percentage of a bin from an ultrasonic distance reading. Based on the code below, what else should I do?
# Libraries
import RPi.GPIO as GPIO  # import GPIO library
import time              # import time library

GPIO.setmode(GPIO.BCM)   # GPIO mode (BOARD / BCM)

# set GPIO pins
GPIO_TRIGGER = 18
GPIO_ECHO = 24

# set GPIO direction (IN / OUT)
GPIO.setup(GPIO_TRIGGER, GPIO.OUT)
GPIO.setup(GPIO_ECHO, GPIO.IN)

# assumed: depth of the empty bin in cm -- this must be defined for percentage() to work
binheight = 100

def distance():
    GPIO.output(GPIO_TRIGGER, True)   # set trigger to HIGH
    time.sleep(0.00001)
    GPIO.output(GPIO_TRIGGER, False)  # set trigger back to LOW after 0.01 ms (10 us)

    StartTime = time.time()
    StopTime = time.time()

    while GPIO.input(GPIO_ECHO) == 0:  # save start time
        StartTime = time.time()

    while GPIO.input(GPIO_ECHO) == 1:  # save time of arrival
        StopTime = time.time()

    TimeElapsed = StopTime - StartTime  # time difference between start and arrival
    # multiply by the speed of sound (34300 cm/s)
    # and divide by 2, because the pulse travels there and back
    distance = (TimeElapsed * 34300) / 2
    return distance

def percentage(temp):
    percentage = round(((binheight - temp) / binheight) * 100, 2)
    return percentage

if __name__ == '__main__':
    try:
        while True:
            dist = distance()
            percent = percentage(dist)
            print("Measured Distance = %.1f cm" % dist)
            print("Percentage", percent, "%")
            time.sleep(1)
    except KeyboardInterrupt:  # reset by pressing CTRL + C
        print("Measurement stopped by User")
        GPIO.cleanup()
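For reference, a quick worked example of the percentage formula, assuming the placeholder binheight of 100 cm (the sensor-to-bottom distance of the empty bin): a measured distance of 25 cm gives (100 - 25) / 100 * 100 = 75.0, i.e. the bin is 75% full.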
from datetime import datetime

start = datetime.fromtimestamp(1485008513.00000)
end = datetime.fromtimestamp(1485788517.80000)

# Duration
duration = end - start
My result is:
9 days, 0:40:04.800000
But it must be like this (without days; only hours, minutes and seconds):
216:40:04.800000
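(That is 9 days × 24 = 216 hours, plus the remaining 0:40:04.800000.)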
Thanks a lot!
Not elegant, but it works (for your example, for durations of less than a day, and for durations of well over 1000 days) — but it's ugly:
import datetime

start = datetime.datetime.fromtimestamp(1485008513.00000)
end = datetime.datetime.fromtimestamp(1485788517.80000)

# Duration
duration = end - start
dur = str(duration).split(',')
print dur

# a duration of less than a day is not str()-ed as "0 days, ...", so we fix that
# here by introducing an artificial zero day if the split only returns 1 element
if len(dur) < 2:
    dur = ["0", dur[0]]

# take the day count (works for "9 days" and "1 day" alike), multiply by 24
dayHours = int(dur[0].split()[0]) * 24
hours = dur[1].split(':')[0]                # the partial hours of the remainder
minsSecs = ':'.join(dur[1].split(':')[1:])  # the rest, after the hours

# print all combined
print str(dayHours + int(hours)) + ':' + minsSecs
Output:
216:40:04.800000
Maybe better:
totSec = duration.total_seconds()
hours = totSec // (60 * 60)
mins = (totSec - hours * 60 * 60) // 60
secs = totSec - hours * 60 * 60 - mins * 60
print "{:d}:{:02d}:{:09.6f}".format(int(hours), int(mins), secs)
Really struggling with this one... Forgive the longish post.
I have an experiment that on each trial displays some stimulus, collects a response, and then moves on to the next trial.
I would like to incorporate an optimizer that runs between trials. It must run only within a specific time window designated by me; if the window elapses first, the optimizer should be terminated, and I would like to get back the last set of parameters it tried so that I can use them later.
Generally speaking, here's the order of events I'd like to happen:
In between trials:
1. Display the stimulus ("+") for some number of seconds.
2. While this is happening, run the optimizer.
3. If the time for displaying the "+" has elapsed and the optimizer has not finished, terminate the optimizer, return the most recent set of parameters it tried, and move on.
Here is some of the relevant code I'm working with so far:
do_bns() is the objective function. Inside it I include NLL['par'] = par or q.put(par) to record the current parameters.
from scipy.optimize import minimize
from multiprocessing import Process, Manager, Queue
from psychopy import core #for clock, and other functionality
clock = core.Clock()
def optim(par, NLL, q):
    a = minimize(do_bns, (par), method='L-BFGS-B', args=(NLL, q),
                 bounds=[(0.2, 1.5), (0.01, 0.8), (0.001, 0.3), (0.1, 0.4), (0.1, 1), (0.001, 0.1)],
                 options={"disp": False, 'maxiter': 1, 'maxfun': 1, "eps": 0.0002}, tol=0.00002)
if __name__ == '__main__':
    print('starting optim')
    max_time = 1.57
    with Manager() as manager:
        par = manager.list([1, 0.1, 0.1, 0.1, 0.1, 0.1])
        NLL = manager.dict()
        q = Queue()
        p = Process(target=optim, args=(par, NLL, q))
        p.start()
        start = clock.getTime()
        while clock.getTime() - start < max_time:
            p.join(timeout=0)
            if not p.is_alive():
                break
        if p.is_alive():
            res = q.get()
            p.terminate()
            stop = clock.getTime()
            print(NLL['a'])
            print('killed after: ' + str(stop - start))
        else:
            res = q.get()
            stop = clock.getTime()
            print('terminated successfully after: ' + str(stop - start))
        print(NLL)
        print(res)
This code, on its own, seems to more or less do what I want. One issue: the res = q.get() right above the p.terminate() actually takes something like 200 ms, so it will not terminate exactly at max_time if max_time < ~1.5 s.
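If that blocking q.get() delay matters, one option (a sketch against the names in the snippet above, untested with PsychoPy; on Python 2 the import would be from Queue import Empty) is to drain the queue non-blockingly inside the polling loop, so the latest parameters are already in hand when the deadline hits:

from queue import Empty  # Python 2: from Queue import Empty

res = None
start = clock.getTime()
while clock.getTime() - start < max_time:
    try:
        res = q.get_nowait()  # keep whatever the optimizer published last
    except Empty:
        pass
    if not p.is_alive():
        break
if p.is_alive():
    p.terminate()  # res already holds the most recent parameter set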
If I wrap this code in a while-loop that checks to see if it's time to stop presenting the stimulus:
stim_start = clock.getTime()
stim_end = 5
print('showing stim')
textStim.setAutoDraw(True)
win.flip()
while clock.getTime() - stim_start < stim_end:
# insert the code above
print('out of loop')
I get weird behavior such as multiple iterations of the whole code from the beginning...
showing stim
starting optim
showing stim
out of loop
showing stim
out of loop
[1.0, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001, 0.10000000000000001]
killed after: 2.81303440395
Note the multiple 'showing stim' and 'out of loop' lines.
I'm open to any solution that accomplishes my goal :|
Help and thank you!
Ben
General remark
Your solution would give me nightmares! I don't see a reason to use multiprocessing here, and I'm not even sure how you grab those updated results before termination. Maybe you have your reasons for this approach, but I highly recommend something else (which has a limitation).
Callback-based approach
The general idea I would pursue is the following:
- Fire up your optimizer with some additional time-limit information and a callback enforcing it.
- The callback is called in each iteration of the optimizer.
- If the time limit is reached: raise a customized Exception.
The limits:
- As the callback is only called once per iteration, there is only a limited sequence of points in time at which the optimizer can be stopped.
- The potential overshoot depends heavily on the per-iteration time of your problem (numerical differentiation, huge data, slow function evaluation; all of this matters).
- If not exceeding the given time is of the highest priority, this approach might not be right, or you would need some kind of safeguarded interpolation to decide whether one more iteration will fit in time.
- Or: combine your kind of killing off workers with my approach of publishing intermediate results through a callback.
Example code (a bit hacky):
import time
import numpy as np
import scipy.sparse as sp
import scipy.optimize as opt

np.random.seed(1)

""" Fake task: sparse NNLS """
M, N, D = 2500, 2500, 0.1
A = sp.random(M, N, D)
b = np.random.random(size=M)

""" Optimization setup """
class TimeOut(Exception):
    """Raise for my specific kind of exception"""

def opt_func(x, A, b):
    return 0.5 * np.linalg.norm(A.dot(x) - b)**2

def opt_grad(x, A, b):
    Ax = A.dot(x) - b
    grad = A.T.dot(Ax)
    return grad

def callback(x):
    time_now = time.time()           # probably not the right tool in general!
    callback.result[0] = np.copy(x)  # better safe than sorry -> copy; update in place
    if time_now - callback.time_start >= callback.time_max:
        raise TimeOut("Time out")

def optimize(x0, A, b, time_max):
    result = [np.copy(x0)]  # hack: mutable type, so the callback can update it in place
    time_start = time.time()
    try:
        """ Add additional info to the callback (it only takes x as a param!) """
        callback.time_start = time_start
        callback.time_max = time_max
        callback.result = result
        res = opt.minimize(opt_func, x0, jac=opt_grad,
                           bounds=[(0, np.inf) for i in range(len(x0))],  # NNLS
                           args=(A, b), callback=callback, options={'disp': False})
    except TimeOut:
        print('time out')
        return result[0], opt_func(result[0], A, b)
    return res.x, res.fun

print('experiment 1')
start_time = time.perf_counter()
x, res = optimize(np.zeros(len(b)), A, b, 0.1)  # 0.1 seconds max!
end_time = time.perf_counter()
print(res)
print('used secs: ', end_time - start_time)

print('experiment 2')
start_time = time.perf_counter()
x_, res_ = optimize(np.zeros(len(b)), A, b, 5)  # 5 seconds max!
end_time = time.perf_counter()
print(res_)
print('used secs: ', end_time - start_time)
Example output:
experiment 1
time out
422.392771467
used secs: 0.10226839151517493
experiment 2
72.8470708728
used secs: 0.3943936788825996
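(In experiment 2 the optimizer converged on its own well before the 5-second cap, which is why no 'time out' line appears and only about 0.39 seconds were used.)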
I need to know how to get the time elapsed between edit_date (a column from one of my models) and datetime.now(). My edit_date column is a DateTimeField. (I'm using Python 2.7 and Django 1.10.)
This is the function I'm trying to write:
from datetime import datetime

def time_in_status(request):
    for item in Reporteots.objects.exclude(edit_date__exact=None):
        date_format = "%Y-%m-%d %H:%M:%S"
        a = datetime.now()
        b = item.edit_date
        c = a - b
        dif = divmod(c.days * 86400 + c.seconds, 60)  # (total minutes, leftover seconds)
        days = str(dif)
        print days
The only thing I'm getting from this function is the minutes elapsed and the seconds. What I need is the elapsed time in the following format:
Time_elapsed = 3d 47m 23s
Any ideas? Let me know if I'm not clear or if you need more information.
Thanks for your attention,
Take a look at dateutil.relativedelta:
http://dateutil.readthedocs.io/en/stable/relativedelta.html
from dateutil.relativedelta import relativedelta
from datetime import datetime

now = datetime.now()
ago = datetime(2017, 2, 11, 13, 5, 22)
diff = relativedelta(now, ago)  # later datetime first, otherwise all fields come out negative
print "%dd %dm %ds" % (diff.days, diff.minutes, diff.seconds)
I did that code from memory, so you may have to tweak it to your needs.
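One caveat worth knowing: relativedelta normalizes the difference into years, months, days, and so on, so diff.days is only the day component of the gap, not the total day count, and the format string above also silently drops hours. For a plain elapsed time, the timedelta arithmetic in the next answer may be the simpler fit.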
Try something like
c = a - b
# c.seconds covers only the part of the gap inside the last day;
# whole hours are dropped by the requested "Xd Xm Xs" format
minutes = (c.seconds % 3600) // 60
seconds = c.seconds % 60
print "%sd %sm %ss" % (c.days, minutes, seconds)
I'm writing Python code to get the percentage of each byte value contained in a file, then check each percentage against a given limit and display the byte value (as hex) plus its percentage if it is over that limit.
My code works great, but it is very time-consuming. It takes approximately 1 minute for a 190 KB file.
import time

def string2bytes(data):
    return "".join("{:02x}".format(ord(c)) for c in data)

startTime = time.time()

# get data from file
f = open("myfile.bin", "rb")
filedata = f.read()
size = f.tell()
f.close()

# count each byte value, check its percentage, and store it in a dictionary if above 0.50%
ChkResult = True
r = {}
for data in filedata:
    c = float(filedata.count(data)) / size * 100
    if c > 0.50:
        ChkResult = False
        tag = string2bytes(data).upper()
        r[tag] = c

# print result
if ChkResult:
    print "OK"
else:
    print "DANGER!"
    print "Some byte values exceed 0.50%."
    for x in sorted(r.keys()):
        print "  0x%s is %.2f%%" % ((x), r[x])
print "Done in %.2f seconds." % (time.time() - startTime)
Do you have any idea how to reduce this time while keeping the same result? I have to stay with Python 2.7.x (for many reasons).
Many thanks.
Use collections.Counter to prevent O(n^2) time:
You are calling count once per byte of the file, i.e. n times, and each count call is itself O(n).
import time
from collections import Counter

def string2bytes(data):
    return "".join("{:02x}".format(ord(c)) for c in data)

startTime = time.time()

# get data from file
f = open("myfile.bin", "rb")
filedata = f.read()
size = f.tell()
f.close()

# count each byte value, check its percentage, and store it in a dictionary if above 0.50%
ChkResult = True
r = {}
for k, v in Counter(filedata).items():
    c = float(v) / size * 100
    if c > 0.50:
        ChkResult = False
        tag = string2bytes(k).upper()
        r[tag] = c

# print result
if ChkResult:
    print "OK"
else:
    for x in sorted(r.keys()):
        print "  0x%s is %.2f%%" % ((x), r[x])
print "Done in %.2f seconds." % (time.time() - startTime)
or slightly more succinctly:
import time
from collections import Counter

def fmt(data):
    return "".join("{:02x}".format(ord(c)) for c in data).upper()

def pct(v, size):
    return float(v) / size * 100

startTime = time.time()

with open("myfile.bin", "rb") as f:
    counts = Counter(f.read())
    size = f.tell()

# the same 0.50% cutoff, expressed in raw counts to avoid a division per byte value
threshold = size * 0.005
err = {fmt(k): pct(v, size) for k, v in counts.items() if v > threshold}

if not err:
    print "OK"
else:
    for k, v in sorted(err.items()):
        print "  0x{} is {:.2f}%".format(k, v)
print "Done in %.2f seconds." % (time.time() - startTime)
If there is a need for speed:
I was curious, so I tried a homespun version of counter. I actually thought it would not be faster, but I am getting better performance than collections.Counter.
import collections

def counter(s):
    '''Counts the (hashable) things in s.

    Returns a collections.defaultdict -> {thing: count}
    '''
    a = collections.defaultdict(int)
    for c in s:
        a[c] += 1
    return a
This could be substituted into @DTing's solution — I wouldn't change any of that.
It turns out it wasn't homespun at all; the same pattern is listed in the defaultdict examples in the docs.
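If you want to check that speed claim on your own machine, here is a quick timeit sketch (using os.urandom as a stand-in for your 190 KB file; Python 2 as in the question):

import timeit

setup = '''
import collections, os
data = os.urandom(190 * 1024)  # stand-in for a 190 KB file

def counter(s):
    a = collections.defaultdict(int)
    for c in s:
        a[c] += 1
    return a
'''

# time 10 runs of each approach over the same data
print timeit.timeit('counter(data)', setup=setup, number=10)
print timeit.timeit('collections.Counter(data)', setup=setup, number=10)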