Converting Tkinter Event Time to System Time - python-2.7

I'm trying to see how recently an Event occurred (so that I can ignore the backlog of events that built up while the first event was being processed). I see that events have a time attribute in milliseconds, but it doesn't line up with the system time I get from calling time.time(). Does anyone know how to convert between the two? Thanks!
Example
from Tkinter import Tk, Label
from time import time
def print_fn(event): print event.time, time()
app = Tk()
label = Label(app, text='Click Here!')
label.bind('<Button>', print_fn)
label.pack()
app.mainloop()
Output
1430467703 1360190553.41

The event.time attribute would be useful for determining the time between two Tkinter events.
event.time
This attribute is set to an integer which has no absolute meaning, but
is incremented every millisecond. This allows your application to
determine, for example, the length of time between two mouse clicks.
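For example, here is a minimal sketch built on the question's snippet that prints the gap between consecutive clicks (prev is a one-element list used as a mutable cell for the handler):
from Tkinter import Tk, Label

prev = [None]  # previous click's event.time, on Tk's millisecond clock

def on_click(event):
    if prev[0] is not None:
        # Difference between two clicks, in milliseconds
        print "ms since last click:", event.time - prev[0]
    prev[0] = event.time

app = Tk()
label = Label(app, text='Click Here!')
label.bind('<Button>', on_click)
label.pack()
app.mainloop()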
time.time
Return the time in seconds since the epoch as a floating point number.
Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second. While this function normally returns non-decreasing values, it
can return a lower value than a previous call if the system clock has
been set back between the two calls.
To measure how much time has elapsed, we generally use time.time() or time.clock() like this:
import time

start = time.clock()
somefunction()
elapsed = time.clock() - start
You wouldn't have to use event.time at all.
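If you really do want to relate the two clocks, one workable sketch (not an official conversion: event.time wraps and has no absolute meaning, so the offset only holds within a session) is to calibrate on the first event:
from time import time

offset = [None]  # clock offset, calibrated on the first event

def event_to_system_time(event):
    # event.time is milliseconds on Tk's arbitrary clock; pin it to
    # time.time() on the first event we see, then reuse that offset.
    if offset[0] is None:
        offset[0] = time() - event.time / 1000.0
    return event.time / 1000.0 + offset[0]
Events from the pre-existing backlog will then map to system times older than the moment you started processing.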

Related

How to send a code to the parallel port in exact sync with a visual stimulus in Psychopy

I am new to Python and PsychoPy; however, I have vast experience in programming and in designing experiments (using Matlab and E-Prime). I am running an RSVP (rapid serial visual presentation) experiment which displays a different visual stimulus every X ms (X is an experimental variable and can be from 100 ms to 1000 ms). As this is a physiological experiment, I need to send triggers over the parallel port exactly on stimulus onset. I test the sync between triggers and visual onset using an oscilloscope and photosensor. However, whether I send my trigger before or after the win.flip(), even with the window's waitBlanking=False parameter, I still get a difference between the onset of the stimuli and the onset of the code.
Attached is my code:
im = []
for pic in picnames:
    im.append(visual.ImageStim(myWin, image=pic, pos=[0, 0], autoLog=True))

myWin.flip()  # to get to the next vertical blank
while tm < len(im) and t < len(codes):
    im[tm].draw()
    parallel.setData(codes[t])  # before the flip
    myWin.flip()
    # parallel.setData(codes[t])  # after the flip
    ttime.append(myClock.getTime())
    core.wait(0.01)
    parallel.setData(0)
    dur = (myClock.getTime() - ttime[t]) * 1000
    while dur < stimDur - frameDurAvg + 1:
        dur = (myClock.getTime() - ttime[t]) * 1000
    t = t + 1
    tm = tm + 1
myWin.flip()
How can I sync my stimulus onset to the trigger? I'm not sure if this is a graphics-card issue (I'm using an Acer LCD screen with the onboard Intel graphics card). Many thanks,
Shani
win.flip() waits for the next monitor update. This means that the next line after win.flip() is executed almost exactly when the monitor begins drawing the frame. That's where you want to send your trigger. The line just before win.flip() runs potentially almost one frame earlier, e.g. 16.7 ms on a 60 Hz monitor, so your trigger would arrive too early.
There are two almost identical ways to do it. Let's start with the most explicit:
for i in range(10):
    win.flip()
    # On the first flip
    if i == 0:
        parallel.setData(255)
        core.wait(0.01)
        parallel.setData(0)
... so the signal is sent just after the image has been pushed to the monitor.
The slightly more timing-accurate way to do it will save you something like 0.01 ms (plus or minus an order of magnitude). Somewhere early in the script, define:
def sendTrigger(code):
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)
Then do
win.callOnFlip(sendTrigger, code=255)
for i in range(10):
    win.flip()
This will call the function just after the first flip, before PsychoPy does a bit of housecleaning. So the function could just as well have been called win.callOnNextFlip, since it is only executed on the first following flip.
Again, this difference in timing is so minuscule compared to other factors that this is not really a question of performance but rather of style preference.
There is a hidden timing variable that is usually ignored: the monitor input lag, and I think this is the reason for the delay. Put simply, the monitor needs some time to display the image even after receiving the input from the graphics card. This delay has nothing to do with the refresh rate (how often the screen switches buffers) or the response time of the monitor.
On my monitor, I find a delay of 23 ms when I send a trigger with callOnFlip(). How I correct for it: floor(23/16.667) = 1, and 23 % 16.667 = 6.333. So I call callOnFlip on the second frame, wait 6.3 ms, and trigger the port. This works. I haven't tried with waitBlanking=True, which waits for the blanking start from the graphics card, as that gives me some more time to prepare the next buffer. However, I think that even with waitBlanking=True the effect will be there. (More after testing!)
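As a sketch of that correction (assuming a 60 Hz monitor and the win and parallel names from the snippets above; the 23 ms lag is the value measured with the photosensor):
from psychopy import core

measured_lag_ms = 23.0              # monitor input lag, measured with a photosensor
frame_ms = 1000.0 / 60.0            # ~16.667 ms per frame on a 60 Hz monitor
extra_frames = int(measured_lag_ms // frame_ms)      # = 1 whole frame of lag
residual_s = (measured_lag_ms % frame_ms) / 1000.0   # ~6.3 ms left over

def lag_corrected_trigger(code):
    # Runs on the flip one frame after the stimulus flip; wait out the
    # residual lag, then pulse the parallel port.
    core.wait(residual_s)
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)

win.flip()  # stimulus flip; the image physically appears ~23 ms later
for _ in range(extra_frames - 1):
    win.flip()  # burn any additional whole frames of lag
win.callOnFlip(lag_corrected_trigger, code=255)
win.flip()  # trigger fires at this flip plus the residual wait
In a real RSVP stream you would also draw the next stimulus before each flip; the sketch only shows the trigger timing.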
Best,
Suddha
There is at least one routine you can use to normalize the trigger delay to your screen's refresh rate. I just tested it with a photosensor cell, and I went from a mean delay of 13 milliseconds (SD = 3.5 ms) between the trigger and the stimulus display to a mean delay of 4.8 milliseconds (SD = 3.1 ms).
The procedure is the following:
Compute the mean duration between two displays. Say your screen has a refresh rate of 85.05 Hz (this is my case). This means that there is a mean duration of 1000/85.05 = 11.76 milliseconds between two refreshes.
Just after you call win.flip(), wait for this average delay before you send your trigger: core.wait(0.01176).
This will not ensure that all your delays now equal zero, since you cannot control the synchronization between the win.flip() command and the current state of your screen, but it will center the delay around zero. At least, it did for me.
So the code could be updated as follows:
refr_rate = 85.05
mean_delay_ms = 1000 / refr_rate
mean_delay_sec = mean_delay_ms / 1000  # PsychoPy needs timing values in seconds

def send_trigger(port, value):
    core.wait(mean_delay_sec)
    parallel.setData(value)
    core.wait(0.001)
    parallel.setData(0)

[...]
stimulus.draw()
win.flip()
send_trigger(port, value)
[...]

Measuring concurrent loop times in Erlang

I create a ring of processes in Erlang and wish to measure the time it takes for the first message to pass through the network, as well as for the entire message series; each time the first node gets the message back, it sends another one.
Right now, in the first node, I have the following code:
receive
    stop ->
        io:format("all processes stopped!~n"),
        true;
    start ->
        statistics(runtime),
        Son ! {number, 1},
        msg(PID, Son, M, 1);
    {_, M} ->
        {Time1, _} = statistics(runtime),
        io:format("The last message has arrived after ~p! ~n", [Time1 * 1000]),
        Son ! stop;
Of course, I start the statistics when sending the first message.
As you can see, I use the Time_Since_Last_Call for the first message loop and wish to use the Total_Run_Time for the entire run; the problem is that Total_Run_Time is cumulative from the first time I start the statistics.
The second thought I had in mind is using another process with two receive loops, getting the times for each one, adding them, and printing, but I'm sure that Erlang can do better than this.
I guess the best method to solve this is to somehow flush the Total_Run_Time, but I couldn't find how this could be done. Any ideas how this can be tackled?
One way to measure round-trip times would be to send a timestamp along with each message. When the first node receives the message back, it can measure the round-trip time by subtracting that timestamp from the current time.
To calculate the total run time, I would memorize the first timestamp in the process state (or dictionary), and calculate the total run time when stopping the test.
Besides, given that you mention the network, are you sure that CPU time (which is what statistics(runtime) measures) is what you're after? Perhaps wall clock time would be more appropriate.
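Sketching the idea in Python for brevity (in Erlang you would carry an os:timestamp() value inside the message tuple instead): stamp each message when it is sent, compute the round trip on receipt, and keep the very first stamp to get the total run time without any flushing.
import time

first_sent = [None]  # timestamp of the very first message, for the total run time

def stamp(msg):
    # Attach the wall-clock send time to the message.
    now = time.time()
    if first_sent[0] is None:
        first_sent[0] = now
    return (now, msg)

def on_receive(stamped):
    sent_at, msg = stamped
    now = time.time()
    round_trip = now - sent_at       # this loop's round-trip time
    total_run = now - first_sent[0]  # cumulative run time so far
    return msg, round_trip, total_run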

Scheduling reset every 24 hours at midnight

I have a counter "numberOrders" and I want to reset it every day at midnight, to know how many orders I get per day. What I have right now is this:
val system = akka.actor.ActorSystem("system")
system.scheduler.schedule(86400000 milliseconds, 0 milliseconds){(numberOrders = 0)}
This piece of code is inside a def which is called every time I get a new order, so what it does is reset numberOrders 24 hours after the first order, or after every order; I'm not really sure whether it resets 24 hours after each new order. Either way, that is not what I want: I want to reset the variable every day at midnight. Any idea? Thanks!
To further extend pushy's answer: since you might not always be sure when the application started, and you want to be sure it runs exactly at midnight, you can do the following:
val system = akka.actor.ActorSystem("system")
// Delay until the next midnight (UTC), rather than a flat 24 hours from now
val wait = (24 hours).toMillis - (System.currentTimeMillis % (24 hours).toMillis)
system.scheduler.schedule(Duration.apply(wait, MILLISECONDS), 24 hours, orderActor, ResetCounterMessage)
Might not be the tidiest of solutions but it does the job.
As schedule supports repeated executions, you could just set the interval parameter to 24 hours, set the initial delay to the amount of time between now and midnight, and initiate the code at startup. You seem to be creating a new ActorSystem every time you get an order right now; that does not seem quite right, and you would be rid of that as well.
Also I would suggest using the schedule method which sends messages to actors instead. This way the actor that processes the order could keep count, and if it receives a ResetCounter message it would simply reset the counter. You could simply write:
system.scheduler.schedule(x seconds, 24 hours, orderActor, ResetCounterMessage)
when you start up your actor system initially, and be done with it.

How can I periodically execute some function if it takes a long time to run (less than the period)?

I want to run a function, for example func(), exactly once per second. However, the running time of func() is about 500 ms. How can I do that? I know that if the running time of the function were low, I could write a while loop in func() and sleep() for 1 second after each execution. But now the running time is high. What should I do to ensure that func() runs exactly once per second? Thanks.
You do:
Take the current time in start_time.
Perform your job
Take the current time in end_time
Wait for (1 second + start_time - end_time)
That way, you can perform your task every second reliably. If the task takes less time, you will wait longer, and vice versa. Note however that this assumes your task always takes less than 1 second to execute. In real code, you want to check for that before the sleep statement.
Implementation details depend on the platform.
Note that using this method still results in a small drift due to the time it takes to compute step 4. A more accurate alternative would be to synchronize on integer multiples of one second; that way, over thousands of cycles you would not drift.
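A minimal Python sketch of both variants (func is a stand-in for the ~500 ms job; the second loop anchors to integer multiples of the period, so the error from step 4 cannot accumulate):
import time

def func():
    time.sleep(0.5)  # stand-in for the real ~500 ms job

# Variant 1: measure and compensate each cycle (can drift slightly).
def run_simple(period=1.0):
    while True:
        start = time.time()
        func()
        remaining = period - (time.time() - start)
        if remaining > 0:  # guard against the job overrunning the period
            time.sleep(remaining)

# Variant 2: schedule against absolute multiples of the period (no drift).
def run_anchored(period=1.0):
    next_run = time.time()
    while True:
        func()
        next_run += period
        delay = next_run - time.time()
        if delay > 0:
            time.sleep(delay)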
It depends on the level of accuracy you need.
If you want a brute, easy-to-code solution, you can get the time before the first run of the function and save it in a variable (start_time). Create a repeat-count variable (repeat_number) that stores the next repeat number. Then you can do roughly this:
1) next_run_time = ++repeat_number*1sec + start_time;
2) func();
3) wait_time = next_run_time - current_time;
4) sleep(wait_time)
5) goto 1;
This approach prevents timing error from accumulating across iterations.
But for the real application you should find some event framework or library.

How to only get a key entry once per second? (or delay the time between two keyboard entries)

In pygame, I am using the "pressed_keys" key-state array.
This is my code:
if pressed_keys[K_y]:
    base += 10
But when I press it only once, base increases by 200 or so. I want to know if there is a way to increase the time between two entries?
Thanks for helping!
(P.S. I really don't know how to search for questions similar to this one. I hope this is not a duplicate, but in case it is, let me know and I will delete this question. Thanks again!)
Here http://www.pygame.org/docs/ref/key.html#pygame.key.set_repeat
pygame.key.set_repeat(delay, interval): return None
also:
pygame.key.get_pressed()[K_y]: return bool
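A small sketch of the event-based route (note that set_repeat controls repeated KEYDOWN events; it does not affect the get_pressed() state array):
import pygame
from pygame.locals import KEYDOWN, QUIT, K_y

pygame.init()
screen = pygame.display.set_mode((320, 240))

# While a key is held, the first repeated KEYDOWN arrives after 500 ms,
# then another every 1000 ms.
pygame.key.set_repeat(500, 1000)

base = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
        elif event.type == KEYDOWN and event.key == K_y:
            base += 10  # fires once per KEYDOWN, not once per frame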
Another way is to store the time at which you accepted the key press, and wait before accepting it again:
import time

interval = 0.1  # your interval in seconds (time.time() works in seconds, not milliseconds)
lasttime = 0

while 1:
    draw()    # draw routine
    events()  # other event handling
    now = time.time()  # save it once if you test more than one key, reducing time.time() calls
    if pressed_keys[K_y] and (now - lasttime) > interval:
        lasttime = now
        base += 10
time.time() Return the time in seconds since the epoch as a floating point number.
The epoch is the point where the time starts. On January 1st of that year, at 0 hours, the “time since the epoch” is zero. For Unix, the epoch is 1970.
Knowing that, you compare the time right now against the last time you saved:
now - lasttime. When this delta is greater than the interval, you are allowed to continue with your event; don't forget to update your lasttime variable.
I hope you know enough about pygame to use a clock.
(For simplicity's sake we'll say the time interval required will be one second)
A simple solution would be to only check for input every second, using a simple counter and the pygame clock.
First off, start the clock and the counter outside of your main loop.
Also, add a boolean variable to determine if the key was pressed within this second.
FRAMERATE = 30 #(The framerate used in this example is 30 FPS)
clock = pygame.time.Clock()
counter = 0
not_pressed = True
Then inside the main loop, the first thing you do is increase the counter, then tick the clock.
while argument:
    counter += 1
    clock.tick(FRAMERATE)
Then, where you have your code, add an if statement to see if the button has been pressed this second:
if not_pressed:
    if pressed_keys[K_y]:
        not_pressed = False
        base += 10
#Rest of code:
if pressed_keys[K_UP]:
Finally, at the end of your main loop, add a checker to switch the boolean not_pressed back to True every second:
if counter == FRAMERATE:
    counter = 0
    not_pressed = True
That should allow the program to only take input from the user once every second.
To change the interval, simply change the if counter == FRAMERATE: line.
if counter == FRAMERATE: would be 1 Second
if counter == (FRAMERATE*2): would be 2 Seconds
if counter == int(FRAMERATE/4): would be a quarter of a second*
*note- make sure you turn FRAMERATE divided by a number, into an integer, either by using int() surrounding the division, or by using integer division: (FRAMERATE//4)
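To see how the pieces fit together, here is a minimal self-contained sketch of the approach above (the window size and the base counter are placeholders):
import pygame
from pygame.locals import K_y

pygame.init()
screen = pygame.display.set_mode((320, 240))

FRAMERATE = 30
clock = pygame.time.Clock()
counter = 0
not_pressed = True
base = 0

running = True
while running:
    counter += 1
    clock.tick(FRAMERATE)

    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    pressed_keys = pygame.key.get_pressed()
    if not_pressed and pressed_keys[K_y]:
        not_pressed = False
        base += 10

    # Re-allow the key once per second (every FRAMERATE frames).
    if counter == FRAMERATE:
        counter = 0
        not_pressed = True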
See also: Pygame: key.get_pressed() does not coincide with the event queue, on using repeated movement while a key is held down. Using state polling for those keys works better.