I have a script for 3 LCD displays on my Raspberry Pi, and I have a problem with the time.sleep function.
I would like to have a pause only for screen 3 (after 10 s it should show different data), while the other LCDs keep running normally, without a time.sleep pause.
Example:
# LCD1
mylcd1.lcd_display_string("TEMP1:",1,0)
mylcd1.lcd_display_string(str(temp1),1,5)
mylcd1.lcd_display_string("%s" % time.strftime("%H:%M:%S"), 3, 0)
# LCD2
mylcd2.lcd_display_string("TEMP2:",1,0)
mylcd2.lcd_display_string(str(temp2),1,5)
# LCD3
mylcd3.lcd_clear()
mylcd3.lcd_display_string("TEMP3:",1,0)
mylcd3.lcd_display_string(str(temp3),1,5)
time.sleep(10)
mylcd3.lcd_clear()
mylcd3.lcd_display_string("DIM:",1,0)
mylcd3.lcd_display_string(str(dp1),1,4)
In this example the problem is the clock on LCD1: it does not run smoothly, because it has to wait 10 s, and the temperature data on LCD1 and LCD2 must be refreshed in real time, without delay.
Thank you for your help!
I would say an easy way to go about this is to have the LCDs that need updating run in a loop, with a variable keeping track of the time. Then, within the loop, have an if statement that checks whether 10 seconds have elapsed since the last update; if so, refresh the screen that needs the delay.
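For example, a minimal sketch along those lines (it reuses the names from your snippet; mylcd1/mylcd2/mylcd3, temp1..temp3 and dp1 are assumed to be defined and refreshed elsewhere in the loop):

import time

INTERVAL = 10                 # seconds between LCD3 page changes
last_switch = time.time()
page = 0                      # which LCD3 page is currently shown

while True:
    # LCD1 and LCD2 refresh on every pass, with no pause
    mylcd1.lcd_display_string("TEMP1:", 1, 0)
    mylcd1.lcd_display_string(str(temp1), 1, 5)
    mylcd1.lcd_display_string(time.strftime("%H:%M:%S"), 3, 0)
    mylcd2.lcd_display_string("TEMP2:", 1, 0)
    mylcd2.lcd_display_string(str(temp2), 1, 5)

    # LCD3 only changes once 10 s have elapsed - no time.sleep needed
    if time.time() - last_switch >= INTERVAL:
        last_switch = time.time()
        page = 1 - page       # toggle between the two pages
        mylcd3.lcd_clear()
        if page == 1:
            mylcd3.lcd_display_string("TEMP3:", 1, 0)
            mylcd3.lcd_display_string(str(temp3), 1, 5)
        else:
            mylcd3.lcd_display_string("DIM:", 1, 0)
            mylcd3.lcd_display_string(str(dp1), 1, 4)

Because nothing in the loop blocks, the clock on LCD1 keeps ticking smoothly while LCD3 alternates its two pages every 10 seconds.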
I forgot to write it... the complete code is already in a loop...
I want to add time.sleep only for the LCD3 part...
LCD3 should change its data every 10 seconds.
This may be a strange request. I have an infinite while loop; each iteration lasts ~7 minutes, then the program sleeps for a couple of minutes to let the computer cool down, and then starts over.
This is how it looks:
import time as t
t_cooling = 120
while True:
    try:
        # 7 minutes of uninterrupted calculations here
        t.sleep(t_cooling)
    except KeyboardInterrupt:
        break
Right now, if I want to interrupt the process, I have to wait until the program sleeps for 2 minutes; otherwise all the calculations done in the running cycle are wasted. Moreover, the calculations involve writing to files and working with multiprocessing, so interrupting during the calculation phase is not only a waste, but can potentially corrupt the output files.
I'd like to know if there is a way to signal to the program that the current cycle is the last one it has to execute, so that there is no risk of interrupting at the wrong moment. To add one more limitation, it has to be a solution that works from the command line. It's not possible to add a window with a stop button on the computer the program is running on: the machine has a basic Linux installation with no graphical environment. The computer is not particularly powerful or new, and I need to use as much CPU and RAM as possible.
Hope everything is clear enough.
Not so elegant, but it works:
#!/usr/bin/env python
import signal
import time as t

stop = False

def signal_handler(signal, frame):
    print('You pressed Ctrl+C!')
    global stop
    stop = True

signal.signal(signal.SIGINT, signal_handler)
print('Press Ctrl+C')

t_cooling = 1
while not stop:
    t.sleep(t_cooling)
    print('Looping')
You can use a separate Thread and an Event to signal the exit request to the main thread:
import time
import threading
evt = threading.Event()

def input_thread():
    while True:
        if input("") == "quit":
            evt.set()
            print("Exit requested")
            break

threading.Thread(target=input_thread).start()

t_cooling = 5
while True:
    # 7 minutes of uninterrupted calculations here
    print("starting calculation")
    time.sleep(5)
    if evt.is_set():
        print("exiting")
        break
    print("cooldown...")
    time.sleep(t_cooling)
Just for completeness, I post here my solution. It's very raw, but it works.
import time as t
t_cooling = 120
while True:
    # 7 minutes of uninterrupted calculations here
    f = open('stop', 'r')
    stop = f.readline().strip()
    f.close()
    if stop == '0':
        t.sleep(t_cooling)
    else:
        break
I just have to create a file named stop and write a 0 in it. When that 0 is changed to something else, the program stops at the end of the cycle.
I am new to Python and PsychoPy, but I have vast experience in programming and in designing experiments (using Matlab and E-Prime). I am running an RSVP (rapid serial visual presentation) experiment which displays a different visual stimulus every X ms (X is an experimental variable and can range from 100 ms to 1000 ms). As this is a physiological experiment, I need to send triggers over the parallel port exactly on stimulus onset. I test the sync between triggers and visual onset using an oscilloscope and a photosensor. However, whether I send my trigger before or after win.flip(), even with the window's waitBlanking=False parameter, I still get a difference between the onset of the stimulus and the onset of the code.
Attached is my code:
im = []
for pic in picnames:
    im.append(visual.ImageStim(myWin, image=pic, pos=[0, 0], autoLog=True))

myWin.flip()  # to get to the next vertical blank
while tm < len(im) and t < len(codes):
    im[tm].draw()
    parallel.setData(codes[t])  # before
    myWin.flip()
    # parallel.setData(codes[t])  # after
    ttime.append(myClock.getTime())
    core.wait(0.01)
    parallel.setData(0)
    dur = (myClock.getTime() - ttime[t]) * 1000
    while dur < stimDur - frameDurAvg + 1:
        dur = (myClock.getTime() - ttime[t]) * 1000
    t = t + 1
    tm = tm + 1
myWin.flip()
How can I sync my stimulus onset to the trigger? I'm not sure if this is a graphics card issue (I'm using an Acer LCD screen with the onboard Intel graphics card). Many thanks,
Shani
win.flip() waits for the next monitor update. This means that the next line after win.flip() is executed almost exactly when the monitor begins drawing the frame. That's where you want to send your trigger. The line just before win.flip() is potentially almost one frame earlier, e.g. 16.7 ms on a 60 Hz monitor, so your trigger would arrive too early.
There are two almost identical ways to do it. Let's start with the most explicit:
for i in range(10):
    win.flip()
    # On the first flip
    if i == 0:
        parallel.setData(255)
        core.wait(0.01)
        parallel.setData(0)
... so the signal is sent just after the image has been pushed to the monitor.
The slightly more timing-accurate way to do it will save you something like 0.01 ms (give or take an order of magnitude). Somewhere early in the script, define:
def sendTrigger(code):
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)
Then do
win.callOnFlip(sendTrigger, code=255)
for i in range(10):
    win.flip()
This will call the function just after the first flip, before PsychoPy does a bit of housecleaning. So the function could have been called win.callOnNextFlip, since it is only executed on the first following flip.
Again, this difference in timing is so minuscule compared to other factors that this is not really a question of performance but rather of style preference.
There is a hidden timing variable that is usually ignored: the monitor input lag, and I think this is the reason for the delay. Put simply, the monitor needs some time to display the image even after getting the input from the graphics card. This delay has nothing to do with the refresh rate (how many times the screen switches buffers) or the response time of the monitor.
On my monitor, I find a delay of 23 ms when I send a trigger with callOnFlip(). How I correct for it: floor(23/16.667) = 1 and 23 % 16.667 = 6.333, so I register the callOnFlip on the second frame, wait 6.3 ms and then trigger the port. This works. I haven't tried with waitBlanking=True, which waits for the blanking start from the graphics card, as that gives me some more time to prepare the next buffer. However, I think that even with waitBlanking=True the effect will be there. (More after testing!)
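A rough sketch of that correction (assuming PsychoPy's win, parallel and core from the code above, a 60 Hz monitor, i.e. a 16.667 ms frame, and the 23 ms input lag measured here; stim and lagged_trigger are my names, not PsychoPy's):

input_lag_ms = 23.0                   # measured with photosensor + oscilloscope
frame_ms = 1000.0 / 60.0              # 16.667 ms per frame at 60 Hz
full_frames = int(input_lag_ms // frame_ms)       # 1 -> one whole frame of lag
residual_s = (input_lag_ms % frame_ms) / 1000.0   # ~6.3 ms left over

def lagged_trigger(code):
    core.wait(residual_s)             # wait out the residual part of the lag
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)

stim.draw()
win.flip()                            # flip 1: the stimulus frame goes out
win.callOnFlip(lagged_trigger, 255)   # with full_frames == 1, fire on the next flip
win.flip()                            # flip 2: one whole frame of lag has passed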
Best,
Suddha
There is at least one routine that you can use to normalize the trigger delay to your screen's refresh rate. I just tested it with a photosensor cell, and I went from a mean delay of 13 milliseconds (SD = 3.5 ms) between the trigger and the stimulus display to a mean delay of 4.8 milliseconds (SD = 3.1 ms).
The procedure is the following:
Compute the mean duration between two displays. Say your screen has a refresh rate of 85.05 Hz (this is my case). This means that there is a mean duration of 1000/85.05 = 11.76 milliseconds between two refreshes.
Just after you have called win.flip(), wait for this average delay before you send your trigger: core.wait(0.01176).
This will not ensure that all your delays now equal zero, since you cannot control the synchronization between the win.flip() command and the current state of your screen, but it will center the delay around zero. At least, it did for me.
So the code could be updated as follows:
refr_rate = 85.05
mean_delay_ms = 1000 / refr_rate
mean_delay_sec = mean_delay_ms / 1000  # PsychoPy needs timing values in seconds

def send_trigger(port, value):
    core.wait(mean_delay_sec)
    parallel.setData(value)
    core.wait(0.001)
    parallel.setData(0)

[...]
stimulus.draw()
win.flip()
send_trigger(port, value)
[...]
I wanted to run a code (or an external executable) for a specified amount of time. For example, in Fortran I can
call system('./run')
Is there a way I can restrict its run to let's say 10 seconds, for example as follows
call system('./run', 10)
I want to do it from inside the Fortran code; the example above is for the system command, but I also want to do it for some of the other subroutines in my code. For example,
call performComputation(10)
where performComputation will be able to run only for 10 seconds. The system it will run on is Linux.
thanks!
EDITED
Ah, I see - you want to run a part of the current program for a limited time. I see a number of options for that...
Option 1
Modify the subroutines you want to run for a limited time so that they take an additional parameter: the number of seconds they may run. Then modify each subroutine to get the system time at the start and, in its processing loop, get the time again, breaking out of the loop and returning to the caller if the time difference exceeds the maximum allowed number of seconds.
On the downside, this requires you to change every subroutine. It will exit the subroutine cleanly, though.
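The shape of that check, sketched in Python for brevity (perform_computation is a hypothetical stand-in for one of your subroutines; in Fortran you would use the system_clock intrinsic the same way):

import time

def perform_computation(max_seconds):
    start = time.monotonic()          # system time at the start
    while True:
        pass                          # ... one chunk of the real calculation ...
        if time.monotonic() - start >= max_seconds:
            return                    # time budget exhausted: exit cleanly

perform_computation(10)               # runs for at most ~10 seconds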
Option 2
Take advantage of a threading library - e.g. pthreads. When you want to call a subroutine with a timeout, create a new thread that runs alongside your main program in parallel and execute the subroutine inside that thread of execution. Then in your main program, sleep for 10 seconds and then kill the thread that is running your subroutine.
This is quite easy and doesn't require changes to all your subroutines. It is not that elegant in that it chops the legs off your subroutine at some random point, maybe when it is least expecting it.
Imagine time running down the page in the following example, and the main program actions are on the left and the subroutine actions are on the right.
MAIN                             SUBROUTINE YOUR_SUB
... something ..
... something ...
f_pthread_create(,,,YOUR_SUB,)   start processing
sleep(10)                        ... calculate ...
                                 ... calculate ...
                                 ... calculate ...
f_pthread_kill()
... something ..
... something ...
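A rough Python analogue of this pattern, using multiprocessing rather than pthreads (a separate process, unlike a Python thread, can actually be killed from outside; your_sub stands in for the real subroutine):

import time
from multiprocessing import Process

def your_sub():
    while True:          # stand-in for the long-running calculation
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=your_sub)
    p.start()
    p.join(timeout=10)   # let the subroutine run for up to 10 seconds
    if p.is_alive():
        p.terminate()    # chop its legs off, as described above
        p.join()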
Option 3
Abstract out the subroutines you want to call and place them into their own separate executables, then proceed as per my original answer below.
Whichever option you choose, you are going to have to think about how you get the results from the subroutine you are calling - will it store them in a file? Does the main program need to access them? Are they in global variables? The reason is that if you are going to follow options 2 or 3, there will not be a return value from the subroutine.
Original Answer
If you don't have timeout, you can do
call system('./run & sleep 10; kill $!')
Yes, there is a way: take a look at the Linux command timeout.
# run the command for 10 seconds and then send it the SIGTERM kill signal
# if it has not finished.
call system('timeout 10 ./run')
Example
# finishes in 10 seconds with a return code of 0 to indicate success.
sleep 10
# finishes in 1 second with a return code of `124` to indicate timed out.
timeout 1 sleep 10
You can also choose the type of kill signal you want to send by specifying the -s parameter. See man timeout for more info.
I have written a memory tiles program, but I want it to be timed, i.e. the user should be able to play the game for only 2 minutes. What do I do?
Also, on Linux sleep() does not work; what should we use for a delay?
I presume the game has a "main loop" somewhere.
At the beginning of the main loop (before the actual loop), take the current time, call this start_time. Then in each iteration of the loop, take the current time again, call this now. The elapsed time is elapsed_time = now - start_time;. Assuming time is in seconds, then if (elapsed_time >= 120) { ... end game ... } would do the trick.
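A minimal sketch of that loop, in Python for illustration (in C the same arithmetic works with time(NULL); the game logic is a placeholder):

import time

GAME_LIMIT = 120  # seconds the user is allowed to play

start_time = time.monotonic()
while True:
    # ... handle one round of the memory tiles game here ...
    time.sleep(0.1)  # short delay so the loop does not spin flat out
    elapsed_time = time.monotonic() - start_time
    if elapsed_time >= GAME_LIMIT:
        print("Time's up!")
        break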
I have a program that runs every 5 minutes when the stock market is open. It does this by running once, then entering the following function, which returns once 5 minutes have passed, provided the stock market is open.
What I don't understand is that after a period of time, usually about 18 or 19 hours, it crashes with a SIGSEGV error. I have no idea why, as it isn't writing to any memory - although I don't know much about the SYSTEMTIME type, so maybe that's it?
Anyway, any help you could give would be very much appreciated! Thanks in advance!!
void KillTimeUntilNextStockDataReleaseOnWeb()
{
    SYSTEMTIME tLocalTimeNow;

    cout << "\n*****CHECKING IF RUN HAS JUST COMPLETED OR NOT*****\n";
    GetLocalTime(&tLocalTimeNow); // CHECK IF A RUN HAS JUST COMPLETED. IF SO, AWAIT NEXT 5 MINUTE MARK
    while ((tLocalTimeNow.wMinute % 5) == 0)
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****AWAITING 5 MINUTE MARK TO UPDATE STOCK DATA*****\n";
    GetLocalTime(&tLocalTimeNow); // LOOP THROUGH THIS SECTION, CHECKING CURRENT TIME, UNTIL 5 MINUTE UPDATE. THEN PROCEED
    while ((tLocalTimeNow.wMinute % 5) != 0)
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****CHECKING IF MARKET IS OPEN*****\n";
    // CHECK IF STOCK MARKET IS EVEN OPEN. IF NOT, REPEAT
    GetLocalTime(&tLocalTimeNow);
    while ((tLocalTimeNow.wHour < 8) || (tLocalTimeNow.wHour > 17))
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****PROGRAM CONTINUING*****\n";
    return;
}
If you want to "wait for X seconds", then the Windows system call Sleep(x) will sleep for x milliseconds. Note, however, that if you sleep for, say, 300 s after some operation that took 3 seconds, you drift by 3 seconds every 5 minutes. That may not matter, but if it's critical that you keep the same timing all the time, you should figure out [based on time() or some such function] how long it is to the next boundary, and then sleep that amount [possibly run a bit short and then add another check and sleep if you woke up early]. If "every five minutes" is more of an approximate thing, then 300 s is fine.
There are other methods to wait for a given amount of time, but I suspect the above is sufficient.
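The boundary arithmetic looks like this (sketched in Python for brevity; in the C++ code above you would compute the same remainder from the current time and pass milliseconds to Sleep()):

import time

PERIOD = 300  # five minutes, in seconds

while True:
    # ... fetch and process the stock data (may take a few seconds) ...
    now = time.time()
    # sleep exactly until the next 5-minute boundary, so the work
    # time does not accumulate as drift
    time.sleep(PERIOD - (now % PERIOD))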
Instead of using a busy loop, or even Sleep() in a loop, I would suggest using a Waitable Timer instead. That way, the calling thread can sleep effectively while it is waiting, while still providing a mechanism to "wake up" early if needed.
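For reference, the Win32 calls involved are CreateWaitableTimer/SetWaitableTimer plus a wait function such as WaitForSingleObject. The same "efficient sleep that can be woken early" idea, sketched here with a Python threading.Event instead of the Win32 API:

import threading

wake = threading.Event()      # another thread calls wake.set() to end the wait early

if wake.wait(timeout=300):    # sleeps without burning CPU, up to 5 minutes
    print("woken up early")
else:
    print("full 5 minutes elapsed")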