Stopping or cancelling queued keyboard commands in a program - python-2.7

I have a program written in python 2.7 which takes photos of a sample from 3 different cameras when the result value is typed into the program.
The USB controller bandwidth can't handle all cameras firing at the same time, so I have to call each one individually. This causes a delay between pressing the value and the preview of the pictures showing up.
During this delay, the program is still able to accept keyboard commands, which are then processed once the photos have been taken. This is causing issues: sometimes a value is entered twice, and the duplicate is then applied to the next sample after the photos for the first one have been taken.
What I'm after is a way to disregard any queued keyboard commands whilst the program is working on the current command:
def selChange(self):
    # Disable the textbox
    self.valInput.configure(state='disabled')
    # Gather pictures from cameras and store them in 2D list with sample result (this takes a second or two to complete)
    self.gatherPictures()
    if not int(self.SampleList.size()) == 0:
        # Clear textbox
        self.valInput.delete(0, END)
        # Create previews from 2D list
        self.img1 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][2].resize((250,250), Image.ANTIALIAS))
        self.pic1.configure(image=self.img1)
        self.img2 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][3].resize((250,250), Image.ANTIALIAS))
        self.pic2.configure(image=self.img2)
        self.img3 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][4].resize((250,250), Image.ANTIALIAS))
        self.pic3.configure(image=self.img3)
        self.img4 = ImageTk.PhotoImage(Image.open("Data/" + str(self.dataList[int(self.SampleList.curselection()[0])][1]) + ".jpg").resize((250,250), Image.ANTIALIAS))
        self.pic4.configure(image=self.img4)
    # Unlock textbox ready for next sample
    self.valInput.configure(state='normal')
I was hoping that disabling the textbox and re-enabling it afterwards would work, but it doesn't. I wanted to use buttons instead, but they have insisted that the values be typed to increase speed.
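A minimal sketch of one possible way to do this, assuming the stray keystrokes are only delivered once this handler returns: flush Tk's pending events with update() while the Entry is still disabled, so they land on a disabled widget and are discarded, and only then re-enable and clear it. Calling update() from inside an event handler can have side effects of its own, so this is something to experiment with rather than a guaranteed fix:

def selChange(self):
    # Ignore keystrokes while we work
    self.valInput.configure(state='disabled')
    self.gatherPictures()
    # ... build the previews exactly as before ...
    # Process everything that queued up while we were busy; the Entry is
    # still disabled at this point, so those keystrokes are discarded.
    self.valInput.update()
    # Re-enable and clear the textbox ready for the next sample
    self.valInput.configure(state='normal')
    self.valInput.delete(0, END)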

Related

Problem with programming a basic hardware

I have an animation shown on LEDs. When the button is pressed, the animation has to stop and then continue after the button is pressed again.
There is a method that handles the button:
void checkButton(){
    GPIO_PinState state;
    state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
    if (state == GPIO_PIN_RESET) {
        while (1) {
            state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
            if (state == GPIO_PIN_SET) {
                break;
            }
        }
        //while (state == GPIO_PIN_RESET) {
        //    state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
        //}
    }
}
GPIO_PIN_SET is the default button position; GPIO_PIN_RESET is the state when the button is pressed. The commented section is what I tried instead of the while(1){...} loop. The checkButton() method is called from the main loop from time to time. The program runs on an STM32 with an extension module (the exact type of extension module does not matter here).
The fact is that this method only stops the animation for a moment and does not work as I would like it to. Could you correct anything about this program to make it work properly?
Could you correct anything about this program to make it work properly?
My guess is that you are trying to add a 'human interaction' aspect to your design. Your current approach relies on a single (button position) sample randomly timed by a) your application and b) a human finger. This timing is simply not reliable, but the correction is possibly not too difficult.
Note 1: A 'simple' mechanical button will 'bounce' during its activation or release (yes, either way). This means that the value the software 'sees' (in a few microseconds) is unpredictable for several milliseconds around the button push or release.
Note 2: Another way to look at this issue is that your state value exists in two places: in the physical button AND in the variable "GPIO_PinState state;". IMHO, a state value can only reside in one location; keeping it in two locations is always a mistake.
The solution, then (if you accept that), is to decide to keep one state 'record' and eliminate the other. I think you want to keep the button, which seems to be your human input. To be clear, you want to eliminate the variable "GPIO_PinState state;".
This line:
state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
samples the switch state one time.
HOWEVER, you already know that this design can not rely on the one read being correct. After all, your user might have just pressed or released the button, and it is simply bouncing at the time of the sample.
Before we get to accumulating samples, you should be aware that the bouncing can last much more than a few microseconds. I've seen some switches bounce for 10 milliseconds or more. If test equipment is available, I would hook it up and take a look at the characteristics of your button. If not, well, you can try adjusting the controls of the following sample accumulator.
So, how do we 'accumulate' enough samples to feel confident we can know the state of the switch?
Consider multiple samples, spaced in time by short delays (two of the adjustable controls). I think you can simply accumulate them: the first counter to reach some to-be-determined count (5? 10? 100?) of matching samples wins. So spin: sample, delay, and increment one of two counters:
int stateCount[2] = {0, 0};  // state is either set or reset, init both to 0

//                  vvv------- max samples
for (int i = 0; i < 100; ++i) // worst case: how long does your switch bounce?
{
    int sample = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15); // capture 1 sample
    stateCount[sample] += 1;                           // increment based on sample

    // if 'enough' samples are the same, kick out early
    //                     v ---- how long does your switch bounce?
    if (stateCount[sample] > 5) break; // 5 or 10 or 100 ms

    // to-be-determined --------vvv --- how long does the switch bounce?
    std::this_thread::sleep_for(1ms); // 1, 3, 5 or 11 ms between samples
                                      // C++ provides this; use what is available for your system
                                      // and balance with the needs of your app
}
FYI - The above scheme has 3 adjustments to handle different switch-bounce durations ... You have some experimenting to do. I would start with max samples at 20. I have no recommendation for sleep_for ... you provided no other info about your system.
Good luck.
It has been a long time, but I think I remember the push-buttons on telecom infrastructure equipment bouncing for 5 to 15 ms.

pygame clock managed process accelerates and shouldn't

I'm doing an RPG in pygame and just added portals to get from one map to another. The problem is that when I get back to the first map, the movement and animation of my player character somehow accelerate a lot. The acceleration increases each time I go back and forth.
Time is managed with a pygame clock object at 32 ticks per second:
time_passed = clock.tick(32)
self.worlds[self.currentWorld].process(time_passed)
general process method:
def process(self, time_passed):
    tps = time_passed / 1000.0
    for entity in self.entities.itervalues():
        entity.process(tps)
process method for an entity:
def process(self, tps):
    if self.location != self.destination and self.animseq != None:
        self.tps += tps
        if self.tps > 0.25:
            self.tps -= 0.25
            self.image_to_render1 += 1
            if self.image_to_render1 > self.animn:
                self.image_to_render1 = 0
"teleport" method
def changeWorld(self, target):
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    self.player.world = self.worlds[self.currentWorld]
    self.player.location.x = 200
    self.player.location.y = 200
    self.player.reset()
The reset() call is what I first tried to solve the problem; it resets the animations and the player's associated time, but it didn't change anything. I wonder if I just got something wrong with the clock or if I should recreate one on teleport. I hope someone can give me a clue, thanks in advance.
It's a simple logic error: you forget to remove your player from the previous "world" when teleporting:
def changeWorld(self, target):
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    # Where's deleteEntity on the old world?
So when the player comes back, there are two copies of the same player in the world.entities list. It then gets processed twice per tick, and moves twice as fast.
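A corrected changeWorld might look something like this, assuming the world class has (or is given) a deleteEntity method mirroring addEntity:

def changeWorld(self, target):
    # Remove the player from the world we are leaving first; otherwise
    # both worlds keep processing it and it moves twice as fast.
    self.worlds[self.currentWorld].deleteEntity(self.player)  # hypothetical counterpart to addEntity
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    self.player.world = self.worlds[self.currentWorld]
    self.player.location.x = 200
    self.player.location.y = 200
    self.player.reset()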
This kind of error would be very easy to catch with basic debugging: if you just put logging in your loop and in the player.process method, you would clearly see something like this in the output:
starting tick
processing player
processing player <--- There is two of them where should be only one!
starting tick
processing player
processing player
starting tick
processing player
processing player
Next time, try to use debugging (it is harder in visual applications, but not impossible) or logging to make sure every entity in your state is exactly what you expect it to be at every step; when you find a discrepancy, it will be much easier to track down its source. Good luck!
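As a rough illustration (the logging setup and the entity 'name' lookup are placeholders; use whatever identifies your entities), a couple of debug lines in the world's process method are enough to expose the duplicate:

import logging
logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def process(self, time_passed):
    logging.debug("starting tick")
    tps = time_passed / 1000.0
    for entity in self.entities.itervalues():
        # fall back to the object itself if there is no 'name' attribute
        logging.debug("processing %s", getattr(entity, "name", entity))
        entity.process(tps)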

How to send a code to the parallel port in exact sync with a visual stimulus in Psychopy

I am new to Python and PsychoPy; however, I have vast experience in programming and in designing experiments (using Matlab and E-Prime). I am running an RSVP (rapid serial visual presentation) experiment which displays a different visual stimulus every X ms (X is an experimental variable and can be from 100 ms to 1000 ms). As this is a physiological experiment, I need to send triggers over the parallel port exactly on stimulus onset. I test the sync between triggers and visual onset using an oscilloscope and a photosensor. However, whether I send my trigger before or after the win.flip(), even with the window's waitBlanking=False parameter, I still get a difference between the onset of the stimulus and the onset of the trigger code.
Attached is my code:
im = []
for pic in picnames:
    im.append(visual.ImageStim(myWin, image=pic, pos=[0, 0], autoLog=True))

myWin.flip() # to get to the next vertical blank
while tm < len(im) and t < len(codes):
    im[tm].draw()
    parallel.setData(codes[t]) # before
    myWin.flip()
    #parallel.setData(codes[t]) # after
    ttime.append(myClock.getTime())
    core.wait(0.01)
    parallel.setData(0)
    dur = (myClock.getTime() - ttime[t]) * 1000
    while dur < stimDur - frameDurAvg + 1:
        dur = (myClock.getTime() - ttime[t]) * 1000
    t = t + 1
    tm = tm + 1
myWin.flip()
How can I sync my stimulus onset to the trigger? I'm not sure if this is a graphics card issue (I'm using an LCD Acer screen with the onboard Intel graphics card). Many thanks,
Shani
win.flip() waits for the next monitor update. This means that the next line after win.flip() is executed almost exactly when the monitor begins drawing the frame. That's where you want to send your trigger. The line just before win.flip() is potentially executed almost one frame earlier, e.g. 16.7 ms on a 60 Hz monitor, so your trigger would arrive too early.
There are two almost identical ways to do it. Let's start with the most explicit:
for i in range(10):
    win.flip()
    # On the first flip
    if i == 0:
        parallel.setData(255)
        core.wait(0.01)
        parallel.setData(0)
... so the signal is sent just after the image has been pushed to the monitor.
The slightly more timing-accurate way to do it will save you something like 0.01 ms (plus or minus an order of magnitude). Somewhere early in the script, define
def sendTrigger(code):
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)
Then do
win.callOnFlip(sendTrigger, code=255)
for i in range(10):
    win.flip()
This will call the function just after the first flip, before psychopy does a bit of housecleaning. So the function could have been called win.callOnNextFlip since it's only executed on the first following flip.
Again, this difference in timing is so minuscule compared to other factors that this is not really a question of performance but rather of style preference.
There is a hidden timing variable that is usually ignored: the monitor input lag, and I think this is the reason for the delay. Put simply, the monitor needs some time to display the image even after getting the input from the graphics card. This delay has nothing to do with the refresh rate (how many times the screen switches buffers) or with the response time of the monitor.
On my monitor, I find a delay of 23 ms when I send a trigger with callOnFlip(). How I correct for it: floor(23/16.667) = 1 and 23 % 16.667 = 6.333, so I call callOnFlip() on the second frame, wait 6.3 ms and then trigger the port. This works. I haven't tried with waitBlanking=True, which waits for the blanking start from the graphics card, as that gives me some more time to prepare the next buffer. However, I think the effect will be there even with waitBlanking=True. (More after testing!)
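As a rough sketch of that correction (assuming a 60 Hz monitor, an already-created win and stim, and the 23 ms lag measured on my own setup; the numbers will differ on other hardware):

from psychopy import core, parallel

lag_ms = 23.0                             # measured monitor input lag
frame_ms = 1000.0 / 60.0                  # one frame at 60 Hz = 16.667 ms
residual_s = (lag_ms % frame_ms) / 1000.0 # ~0.0063 s left over after one whole frame

def send_trigger(code):
    core.wait(residual_s)                 # wait out the remainder of the input lag
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)

stim.draw()
win.flip()                                # frame 1: the image is handed to the monitor
win.callOnFlip(send_trigger, code=255)
win.flip()                                # frame 2: trigger fires ~6.3 ms after this flip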
Best,
Suddha
There is at least one routine that you can use to normalize the trigger delay to your screen refresh rate. I just tested it with a photosensor cell and I went from a mean delay of 13 milliseconds (sd = 3.5 ms) between the trigger and the stimulus display to a mean delay of 4.8 milliseconds (sd = 3.1 ms).
The procedure is the following:
Compute the mean duration between two displays. Say your screen has a refresh rate of 85.05 Hz (this is my case). This means that there is a mean duration of 1000/85.05 = 11.76 milliseconds between two refreshes.
Just after you have called win.flip(), wait for this average delay before you send your trigger: core.wait(0.01176).
This will not ensure that all your delays now equal zero, since you cannot control the synchronization between the win.flip() command and the current state of your screen, but it will center the delay around zero. At least, it did for me.
So the code could be updated as follows:
refr_rate = 85.05
mean_delay_ms = (1000 / refr_rate)
mean_delay_sec = mean_delay_ms / 1000 # Psychopy needs timing values in seconds

def send_trigger(port, value):
    core.wait(mean_delay_sec)
    parallel.setData(value)
    core.wait(0.001)
    parallel.setData(0)

[...]
stimulus.draw()
win.flip()
send_trigger(port, value)
[...]

Synchronizing input pins in directshow

I am creating a DirectShow filter whose purpose is to take 3 input pins and create a video which alternately shows video from the first source, the second source and the third source, at a fixed time interval.
So if I have three webcams connected to my filter, I want the final video, for example, to show five seconds of the first cam, five seconds of the second cam, and so on...
I have tried two approaches:
Approach one
I use a class TimeManager. This class has a function isItPinsTurn(pinname). This function returns true or false depending on whether the pin is supposed to send samples to the output. To do this, the TimeManager creates a new thread which sleeps for x seconds.
After it has slept, it advances the currently active input pin to the next one.
The result is that every x seconds the isItPinsTurn(pinname) function returns true for a different pin. This way each pin only sends output to the output pin when it is its turn, hence I get the desired video with x-second intervals between the input cams.
The problem with this approach
Sleep doesn't seem to work in DirectShow filters. I get a runtime error:
abort() has been called
Approach two
I use the sample's GetMediaTime method and a buffer which keeps track of how much video, in terms of its media time, has already been sent to the output pin. This is best illustrated with code:
void MyFilter::acceptFilterInput(LPCWSTR pinname, IMediaSample* sample)
{
    mylogger->LogDebug("In acceptFIlterInput", L"D:\\TEMP\\yc.log");
    if (wcscmp(pinname, this->currentInputPin) == 0)
    {
        outpin->Deliver(sample);
        LONGLONG timestart;
        LONGLONG timeend;
        sample->GetTime(&timestart, &timeend);
        *mediaTimeBuffer += timeend - timestart;
        if (*mediaTimeBuffer > this->MEDIATIME)
        {
            this->SetNextPinActive(pinname);
            *mediaTimeBuffer = 0;
        }
    }
}
When the filter starts, currentInputPin is set to pin0 (the first). Calls to acceptFilterInput (which is called by the input pins' Receive function) adjust mediaTimeBuffer by the media-time duration of the MediaSample. If this buffer exceeds MEDIATIME (which can, for example, be 5 seconds), the buffer is set back to zero and the next pin is set active.
Problems with this approach
I am not even sure if CMediaSample::GetMediaTime returns the data I need, as it seems to return negative numbers, which doesn't seem to make much sense. I didn't find useful information about the return value of GetMediaTime on the web.
You are expected to block execution (incoming calls to IPin::Receive) on the input streams so that the other streams can catch up on their own streaming threads. You typically achieve this either by using wait/synchronization APIs and functions, or by holding references to media samples so that the upstream peer blocks on an empty allocator, waiting for a media sample (buffer) to become available.
Yes, Sleep works well, although polling is the worst of the possible options.
Approach two does not make sense to me because I don't see any real synchronization there: there is no execution blocking, and 'making a pin active' does not actually do anything by itself. You cannot force data onto an input pin; you can only wait to be called with a new media sample. So you should block accepting data on one input stream/pin until you get data on another.
Some useful relevant information on multiplexing:
How to make a DirectShow Muxer Filter - Part 1
How to make a DirectShow Muxer Filter - Part 2
GDCL MPEG-4 Multiplexer - available in source, and can multiplex data from 2+ streams

How can I use multiple timers to plan events in my program?

I am building a rockband-like program using C++ and SDL, and want to be able to time events so I can orchestrate a song in the program. Here is what I have accomplished so far:
4 circles which fall from the top of the window to the middle into 4 designated hitting spots.
The circles drop at random intervals (not using time, a random number generator determines how far from the top of the window they begin to fall)
I am able to determine when a note is hit, and a score is displayed in the top right hand corner
Simple sparks are applied around a marker to let you know a note was hit
I can open a file and read text from it
Now I want to be able to use that file to write songs for the program to read and execute. I was thinking of something along the lines of "1g,2g,4y,3r etc.", with the numbers being the milliseconds to wait until the next note and the letters designating which color should fall.
You don't really need (or want) multiple timers; just the single timer that drives your window refresh (at 30fps or whatever) is sufficient.
When you load in your song file, for each note in the song you should store the number of milliseconds that should elapse between the moment the song starts playing and the moment that particular note is played, e.g. (pseudocode):
int millisCounter = 0;
int note, noteLengthMillis;
while (ReadNextNoteFromSongFile(note, noteLengthMillis))
{
    songNoteRecordsVector.push_back(NoteRecord(millisCounter, note));
    millisCounter += noteLengthMillis;
}
Then, when you start the game level going, at the instant the song starts playing, record the current time in milliseconds. You will use this value as your time-zero reference for as long as the song keeps playing.
Now at every video-frame (or indeed at any time), you can calculate the number of milliseconds until a given note will be played, relative to the current system-clock-time:
int NoteRecord::GetMillisecondsUntilNoteIsPlayed(int songStartTimeMillis, int currentTimeMillis) const
{
    return this->myNoteOffsetMillis - (currentTimeMillis - songStartTimeMillis);
}
Note that the value returned will be negative if the note's time-to-be-played has already passed.
Once you have that, it's just a matter of converting each note's current milliseconds-until-note-is-played result into a corresponding on-screen position, and you know where to draw the note-circle for the current frame:
int millisUntilNotePlayTime = note.GetMillisecondsUntilNoteIsPlayed(songStartTimeMillis, currentTimeMillis);
int circleY = someFixedOffsetY + (millisUntilNotePlayTime/(1000/pixelsScrolledPerSecond));
DrawCircleAt(circleX, circleY);
... and if the user presses a key, you can calculate how far off the user was from the correct time for a given note using the same function, e.g.:
int errorMillis = note.GetMillisecondsUntilNoteIsPlayed(songStartTimeMillis, currentTimeMillis);
if (errorMillis < -50)
{
    printf("You're too slow!\n");
}
else if (errorMillis > 50)
{
    printf("You jumped the gun!\n");
}
else
{
    printf("Good job!\n");
}