I'm making an RPG in pygame and just added portals to get from one map to another. The problem is that when I come back to the first map, the movement and animation of my player character somehow accelerate a lot, and this acceleration increases each time I go back and forth.
Time is managed with a pygame clock object at 32 ticks per second:
time_passed = clock.tick(32)
self.worlds[self.currentWorld].process(time_passed)
general process method:
def process(self, time_passed):
    tps = time_passed/1000.0
    for entity in self.entities.itervalues():
        entity.process(tps)
process method for an entity:
def process(self, tps):
    if self.location != self.destination and self.animseq != None:
        self.tps += tps
        if self.tps > 0.25:
            self.tps -= 0.25
            self.image_to_render1 += 1
            if self.image_to_render1 > self.animn:
                self.image_to_render1 = 0
"teleport" method
def changeWorld(self, target):
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    self.player.world = self.worlds[self.currentWorld]
    self.player.location.x = 200
    self.player.location.y = 200
    self.player.reset()
The reset() call was my first attempt at solving the problem; it resets the animations and the player's associated time, but it didn't change anything. I wonder if I just got something wrong with the clock, or if I should recreate one on teleport. I hope someone can give me a clue; thanks in advance.
It's a simple logic error: you forgot to remove your player from the previous "world" when teleporting:
def changeWorld(self, target):
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    # Where's deleteEntity on the old world?
So when he comes back, there are two copies of the player in the world.entities list. The player then gets processed twice per tick and moves twice as fast.
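A minimal sketch of the fix, assuming the world class has (or gets) a removeEntity method as the counterpart to addEntity (that name is an assumption):

def changeWorld(self, target):
    # Take the player out of the world he is leaving first;
    # removeEntity is an assumed counterpart to addEntity.
    self.worlds[self.currentWorld].removeEntity(self.player)
    self.currentWorld = target
    self.worlds[self.currentWorld].addEntity(self.player)
    self.player.world = self.worlds[self.currentWorld]
    self.player.location.x = 200
    self.player.location.y = 200
    self.player.reset()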
This kind of error would be very easy to catch with basic debugging. If you put logging in your loop and in the player.process method, you would clearly see something like this in the output:
starting tick
processing player
processing player <--- There is two of them where should be only one!
starting tick
processing player
processing player
starting tick
processing player
processing player
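A minimal sketch of where such logging could go, using Python's standard logging module on the process method from the question:

import logging
logging.basicConfig(level=logging.DEBUG, format='%(message)s')

class World(object):
    # ... existing __init__, addEntity, etc. ...
    def process(self, time_passed):
        logging.debug('starting tick')
        tps = time_passed/1000.0
        for entity in self.entities.itervalues():
            logging.debug('processing %s', type(entity).__name__)
            entity.process(tps)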
Next time, try using a debugger (it is harder in visual applications, but not impossible) or logging to make sure every entity in your state is exactly what you expect at every step; once you find a discrepancy, it will be much easier to track down its source. Good luck!
I have an animation shown on LEDs. When the button is pressed, the animation has to stop and then continue after the button is pressed again.
There is a method that handles the button:
void checkButton(){
    GPIO_PinState state;
    state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
    if (state == GPIO_PIN_RESET) {
        while (1) {
            state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
            if (state == GPIO_PIN_SET) {
                break;
            }
        }
        //while (state == GPIO_PIN_RESET) {
        //    state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
        //}
    }
}
GPIO_PIN_SET is the default button position; GPIO_PIN_RESET is the state when the button is pressed. The commented-out section is what I tried instead of the while(1){...} loop. The checkButton() method is called from time to time in the main loop. The program runs on an STM32 with an extension module (the exact type of the extension module does not matter here).
The trouble is that this method stops the animation only for a moment and does not work as I would like it to. Could you correct anything about this program to make it work properly?
Could you correct anything about this program to make it work properly?
My guess is that you are trying to add a 'human interaction' aspect to your design. Your current approach relies on a single (button position) sample randomly timed by a) your application and b) a human finger. This timing is simply not reliable, but the correction is possibly not too difficult.
Note 1: A 'simple' mechanical button will 'bounce' during its activation or release (yes, either way). This means that the value which the software 'sees' (in a few microseconds) is unpredictable for several (tbd) milliseconds near the button push or release.
Note 2: Another way to look at this issue is that your state value exists in two places: in the physical button AND in the variable "GPIO_PinState state;". IMHO, a state value can only reside in one location; two locations is always a mistake.
The solution, then (if you agree), is to keep one state 'record' and eliminate the other. IMHO, you want to keep the button, which is your human input. To be clear: you want to eliminate the variable "GPIO_PinState state;".
This line:
state = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15);
samples the switch state one time.
HOWEVER, you already know that this design cannot rely on that one read being correct. After all, your user might have just pressed or released the button, and it may simply be bouncing at the time of the sample.
Before we get to accumulating samples, you should be aware that the bouncing can last much longer than a few microseconds. I've seen some switches bounce for 10 milliseconds or more. If test equipment is available, I would hook it up and take a look at the characteristics of your button. If not, well, you can try adjusting the controls of the following sample accumulator.
So, how do we 'accumulate' enough samples to feel confident we can know the state of the switch?
Consider multiple samples, spaced in time by short delays (two of the adjustable controls). I think you can simply accumulate them; the first counter to reach the threshold (tbd: 5, or 10, or 100 samples?) wins. So spin: sample, delay, and increment one of two counters:
int stateCount[2] = {0, 0}; // state is either set or reset, init both to 0

//               vvv-------- max samples
for (int i = 0; i < 100; ++i) // worst case: how long does your switch bounce?
{
    int sample = HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_15); // capture 1 sample
    stateCount[sample] += 1;                           // increment based on sample

    // if 'enough' samples are the same, kick out early
    //                       v---- how long does your switch bounce?
    if (stateCount[sample] > 5) break; // 5 or 10 or 100 samples

    // to-be-determined ---------vvv--- how long does switch bounce?
    std::this_thread::sleep_for(1ms); // 1, 3, 5 or 11 ms between samples
    // (C++ <thread> and <chrono> provide this; use what is available for
    // your system, balanced with the needs of your app)
}
FYI - The above scheme has 3 adjustments to handle different switch-bounce durations ... You have some experimenting to do. I would start with max samples at 20. I have no recommendation for sleep_for ... you provided no other info about your system.
Good luck.
It has been a long time, but I think I remember the push-buttons on telecom infrastructure equipment bouncing 5 to 15 ms.
I have a program written in Python 2.7 which takes photos of a sample from 3 different cameras when the result value is typed into the program.
The USB controller's bandwidth can't handle all the cameras firing at the same time, so I have to call each one individually. This causes a delay between entering the value and the previews of the pictures showing up.
During this delay, the program is still able to accept keyboard commands, which are then addressed once the photos have been taken. This is causing issues: sometimes a value is entered twice, and the second entry is then applied to the next sample after the photos for the first sample have been taken.
What I'm after is a way to disregard any queued keyboard commands whilst the program is working on the current command:
def selChange(self):
    #Disable the textbox
    self.valInput.configure(state='disabled')
    #Gather pictures from the cameras and store them in a 2D list with the
    #sample result (this takes a second or two to complete)
    self.gatherPictures()
    if not int(self.SampleList.size()) == 0:
        #Clear textbox
        self.valInput.delete(0, END)
        #Create previews from the 2D list
        self.img1 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][2].resize((250, 250), Image.ANTIALIAS))
        self.pic1.configure(image=self.img1)
        self.img2 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][3].resize((250, 250), Image.ANTIALIAS))
        self.pic2.configure(image=self.img2)
        self.img3 = ImageTk.PhotoImage(self.dataList[int(self.SampleList.curselection()[0])][4].resize((250, 250), Image.ANTIALIAS))
        self.pic3.configure(image=self.img3)
        self.img4 = ImageTk.PhotoImage(Image.open("Data/" + str(self.dataList[int(self.SampleList.curselection()[0])][1]) + ".jpg").resize((250, 250), Image.ANTIALIAS))
        self.pic4.configure(image=self.img4)
    #Unlock textbox ready for next sample
    self.valInput.configure(state='normal')
I was hoping that disabling the textbox and re-enabling it afterwards would work, but it doesn't. I wanted to use buttons, but the users have insisted that values be typed, to increase speed.
I am new to Python and PsychoPy, but I have vast experience in programming and in designing experiments (using Matlab and EPrime). I am running an RSVP (rapid serial visual presentation) experiment which displays a different visual stimulus every X ms (X is an experimental variable and can be from 100 ms to 1000 ms). As this is a physiological experiment, I need to send triggers over the parallel port exactly on stimulus onset. I test the sync between the triggers and the visual onset using an oscilloscope and a photosensor. However, whether I send my trigger before or after the win.flip(), even with the window's waitBlanking=False parameter, I still get a difference between the onset of the stimulus and the onset of the code.
Attached is my code:
im = []
for pic in picnames:
    im.append(visual.ImageStim(myWin, image=pic, pos=[0, 0], autoLog=True))

myWin.flip() # to get to the next vertical blank
while tm < len(im) and t < len(codes):
    im[tm].draw()
    parallel.setData(codes[t]) # before
    myWin.flip()
    #parallel.setData(codes[t]) # after
    ttime.append(myClock.getTime())
    core.wait(0.01)
    parallel.setData(0)
    dur = (myClock.getTime() - ttime[t]) * 1000
    while dur < stimDur - frameDurAvg + 1:
        dur = (myClock.getTime() - ttime[t]) * 1000
    t = t + 1
    tm = tm + 1
myWin.flip()
How can I sync my stimulus onset to the trigger? I'm not sure if this is a graphics card issue (I'm using an Acer LCD screen with the onboard Intel graphics card). Many thanks,
Shani
win.flip() waits for the next monitor update. This means that the next line after win.flip() is executed almost exactly when the monitor begins drawing the frame. That's where you want to send your trigger. The line just before win.flip() runs potentially almost one full frame earlier, e.g. 16.7 ms on a 60 Hz monitor, so your trigger would arrive too early.
There are two almost identical ways to do it. Let's start with the most explicit:
for i in range(10):
    win.flip()
    # On the first flip
    if i == 0:
        parallel.setData(255)
        core.wait(0.01)
        parallel.setData(0)
... so the signal is sent just after the image has been pushed to the monitor.
The slightly more timing-accurate way to do it will save you something like 0.01 ms (plus or minus an order of magnitude). Somewhere early in the script, define:
def sendTrigger(code):
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)
Then do:
win.callOnFlip(sendTrigger, code=255)
for i in range(10):
    win.flip()
This will call the function just after the first flip, before PsychoPy does a bit of housecleaning. The function could well have been called win.callOnNextFlip, since it is only executed on the first following flip.
Again, this difference in timing is so minuscule compared to other factors that it is not really a question of performance but rather of style preference.
There is a hidden timing variable that is usually ignored: the monitor's input lag, and I think this is the reason for the delay. Put simply, the monitor needs some time to display the image even after getting the input from the graphics card. This delay has nothing to do with the refresh rate (how often the screen switches buffers) or the response time of the monitor.
On my monitor, I find a delay of 23 ms when I send a trigger with callOnFlip(). Here is how I correct for it: floor(23/16.667) = 1 and 23 % 16.667 = 6.333, so I register the callOnFlip for the second frame, wait 6.3 ms, and then trigger the port. This works. I haven't tried WaitBlanking=True, which waits for the blanking start from the graphics card, as that gives me some more time to prepare the next buffer; however, I think the effect will be there even with WaitBlanking=True. (More after testing!)
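For illustration, a rough sketch of that correction (the 23 ms lag and 60 Hz frame duration are the values from this setup, and win/stim are assumed to be an existing visual.Window and stimulus):

import math
from psychopy import core, parallel

# (assumes win = visual.Window(...) and stim = visual.ImageStim(...) exist)
input_lag_ms = 23.0
frame_ms = 1000.0 / 60.0
whole_frames = int(math.floor(input_lag_ms / frame_ms)) # 1 full frame to skip
residual_s = (input_lag_ms % frame_ms) / 1000.0         # ~0.0063 s left over

def lagged_trigger(code):
    core.wait(residual_s)  # wait out the remainder of the input lag
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)

stim.draw()
win.flip()   # first flip: stimulus handed to the monitor
stim.draw()  # keep the stimulus up for the next frame
win.callOnFlip(lagged_trigger, code=255)
win.flip()   # second flip: callback runs, waits ~6.3 ms, then triggers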
Best,
Suddha
There is at least one routine that you can use to normalize the trigger delay to your screen's refresh rate. I just tested it with a photosensor cell, and I went from a mean delay of 13 milliseconds (sd = 3.5 ms) between the trigger and the stimulus display to a mean delay of 4.8 milliseconds (sd = 3.1 ms).
The procedure is the following:
Compute the mean duration between two displays. Say your screen has a refresh rate of 85.05 Hz (this is my case). This means that there is a mean duration of 1000/85.05 = 11.76 milliseconds between two refreshes.
Just after you call win.flip(), wait for this average delay before you send your trigger: core.wait(0.01176).
This will not ensure that all your delays now equal zero, since you cannot control the synchronization between the win.flip() command and the current state of your screen, but it will center the delay around zero. At least, it did for me.
So the code could be updated as follows:
refr_rate = 85.05
mean_delay_ms = (1000 / refr_rate)
mean_delay_sec = mean_delay_ms / 1000 # Psychopy needs timing values in seconds

def send_trigger(port, value):
    core.wait(mean_delay_sec)
    parallel.setData(value)
    core.wait(0.001)
    parallel.setData(0)

[...]

stimulus.draw()
win.flip()
send_trigger(port, value)
[...]
I am looking at trying to pause something in C++, specifically a bullet you shoot in a Space Invaders game. Each time you press the UP key it fires a shot; I have been trying to find a way to pause firing for a number of seconds before the player is able to fire again.
I've tried Sleep(), but it freezes the entire game rather than just pausing the ability to press UP again.
Firing code:
if (CInput::getInstance()->getIfKeyDownEvent(DIK_UP))
{
    g_pGame->AddSprite(new CMissile(m_fX, m_fY + 0.5 * m_fH, 0.09, 0.9, 2));
}
Try taking the current time and then adding your delay to it. Store that in your shooting object. The next time through your program loop, if the current time is less than the time stored in the object, ignore the UP arrow.
Here are two simple ways to manage this.
When you fire a bullet, take the current system time and add the delay you want to it. If the player attempts to fire again while the current time is less than the variable you set, nothing happens.
Or, when you fire a bullet, set a timer variable to the delay you want. Each update, subtract delta time from the timer. When the timer is <=0, the user can fire.
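As a language-agnostic illustration of the first approach, here is a minimal sketch in Python (time.time() and all the names here are stand-ins for whatever clock and structure your game actually uses):

import time

FIRE_DELAY = 2.0      # seconds between shots; tune to taste
next_fire_time = 0.0  # absolute time at which firing is allowed again

def try_fire():
    # Return True if a shot may be fired right now.
    global next_fire_time
    now = time.time()
    if now < next_fire_time:
        return False  # still cooling down: ignore the UP key
    next_fire_time = now + FIRE_DELAY
    return True       # fire the missile

The second (countdown) variant instead keeps a timer that starts at the delay and has the frame's delta time subtracted from it each update, allowing a shot only once it reaches zero.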
Typically, when you want to deal with real-time seconds, you need something called delta time. Because frame rates are inconsistent, you need a way to measure real time, and you typically do this by counting the amount of time elapsed between frames. Here's an example of this implementation:
Source
int timeSinceStart = glutGet(GLUT_ELAPSED_TIME);
int oldTimeSinceStart = 0;

while( ... ) // main game loop
{
    int timeSinceStart = glutGet(GLUT_ELAPSED_TIME);
    int deltaTime = timeSinceStart - oldTimeSinceStart;
    oldTimeSinceStart = timeSinceStart;

    secondsSinceLastShot += deltaTime;
    if (secondsSinceLastShot > shotTimer)
    {
        canShoot = true;
        secondsSinceLastShot = 0;
    }

    if ( /* press space or something */ )
    {
        canShoot = false;
        // shoot
    }
}
Note that this uses GLUT's timer; with another toolkit you will need to implement the equivalent yourself (probably using clock()).
I've tried Sleep(), but it freezes the entire game rather than just pausing the ability to press UP again.
Sleeping freezes the thread, which is not what you want to do. However, sleep() is typically used in an implementation that tracks delta time, usually sleep()ing away the time left over in the current frame. For an example, see Lazy Foo's SDL tutorials.
Ignore the fact that I linked to both OpenGL and SDL examples; the principle is the same no matter which graphics library is used.
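To make that concrete, a small sketch of such a loop in Python (time.time() and time.sleep() stand in for your platform's clock and sleep calls; the 60 FPS target and frame count are arbitrary):

import time

TARGET_DT = 1.0 / 60.0  # aim for 60 frames per second

previous = time.time()
for frame in range(600):    # ~10 seconds of frames, for demonstration
    now = time.time()
    delta = now - previous  # real time elapsed since the last frame
    previous = now

    # ... update game state and cooldown timers using delta here ...

    leftover = TARGET_DT - (time.time() - now)
    if leftover > 0:
        time.sleep(leftover)  # sleep only until the next frame is due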
I'm trying to see how recently an Event occurred (so that I can ignore the backlog of events that built up while the first event was being processed). I see that events have a time attribute in milliseconds, but it doesn't line up with the system time I get from calling time.time(). Does anyone know how to convert between the two? Thanks!
Example
from Tkinter import Tk, Label
from time import time
def print_fn(event): print event.time, time()
app = Tk()
label = Label(app, text='Click Here!')
label.bind('<Button>', print_fn)
label.pack()
app.mainloop()
Output
1430467703 1360190553.41
The event.time attribute would be useful for determining the time between two Tkinter events.
event.time
This attribute is set to an integer which has no absolute meaning, but is incremented every millisecond. This allows your application to determine, for example, the length of time between two mouse clicks.
time.time
Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
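For instance, a small sketch (adapting the question's example) that uses event.time for exactly that relative measurement:

from Tkinter import Tk, Label

last = [None]  # event.time of the previous click, in milliseconds

def on_click(event):
    if last[0] is not None:
        print 'ms since previous click:', event.time - last[0]
    last[0] = event.time

app = Tk()
label = Label(app, text='Click Here!')
label.bind('<Button>', on_click)
label.pack()
app.mainloop()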
To measure how much time has elapsed we generally use time.time or time.clock like this:
start = time.clock()
somefunction()
elapsed = time.clock() - start
You wouldn't have to use event.time at all.
More info about this can be found here: link