Timeline in Choregraphe going back to original position - Pepper

I am trying to lift Pepper's arms by using a timeline: I physically move them to my desired position with animation mode and record a key frame. The action plays and the arms move, however I cannot manage to keep the arms lifted afterwards. How should this be done?
Thanks for the help

You have the right intuition in checking the resources, but sadly they do not interfere with Pepper's autonomous movements. This is a known issue.
Nonetheless, there are several ways to achieve this:
Make a timeline box with the desired position as the first key frame, running at 0 fps (the frame rate can be set in the timeline's properties).
Make a timeline box with your movement, and with a behavior layer at the end that loops the end of your movement using a Go To box, targeting the desired key frame.
Disable autonomous movements. The robot will not automatically go back to a neutral resting position anymore.
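If you would rather script this outside Choregraphe, the same idea can be expressed with the NAOqi Python API. A minimal sketch, assuming a reachable robot (the IP below is a placeholder) and the ALMotion and ALBackgroundMovement modules of recent NAOqi versions; the angles are examples, not a calibrated pose:

    # Minimal sketch: hold Pepper's arms up via the NAOqi Python API.
    # The IP is a placeholder; the angles are examples, not a calibrated pose.
    from naoqi import ALProxy

    ROBOT_IP = "192.168.1.10"
    motion = ALProxy("ALMotion", ROBOT_IP, 9559)
    background = ALProxy("ALBackgroundMovement", ROBOT_IP, 9559)

    background.setEnabled(False)     # stop the idle motion that lowers the arms

    names = ["LShoulderPitch", "RShoulderPitch"]
    angles = [-0.5, -0.5]            # radians; negative pitch raises the arms
    motion.setStiffnesses(["LArm", "RArm"], 1.0)   # keep stiffness on to hold the pose
    motion.setAngles(names, angles, 0.2)           # 0.2 = fraction of maximum speed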
Note that keeping the arms lifted quickly heats the shoulder motors.
As a motor gets close to overheating, the power in that shoulder is reduced, so the arms may fall despite your command. Yes, Pepper gets physically tired like us!

Related

Rendering issue regarding imagery versus functionality

As I understand rendering textures in SDL2, everything waits behind the scenes and a texture appears once you call SDL_RenderPresent(), then vanishes with SDL_RenderClear(), which you call before drawing the next frame.
I understand that as far as it goes for imagery, but what about functionality? I have two button textures linked to mouse events that I want to see and use at different times in different places. I've got them rendering during different enum states and each button does indeed appear and disappear on cue when the states change.
However, since both button textures are always "there" even while not being rendered, I can still mouse click on the invisible button that isn't being rendered at any given time. This doesn't seem to be an issue for mouse motion events, just mouse button events. How do I make a texture inactive as well as invisible when it's not being rendered?
I solved this one with some tinkering and help from a more experienced programmer, mbozzi, who clued me in to what was going on. The underlying issue was that I had completely decoupled the GUI logic from the GUI rendering. That is what we are always told to do: decouple everything, right? But here I needed to couple the logic and rendering that I want to occur at the same time and place.
My event poll >> mouse input >> image rendering code was one giant loop. When I split that giant loop into separate mini event poll >> mouse input >> image rendering loops that each run independently (not concurrently; I just put each one in its own enum game state), the issue cleared up. So, if anyone has a similar problem with clicking invisible buttons, hopefully this will help.
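For anyone who wants to see the shape of that fix, here is a rough runnable sketch in plain Python (the names are invented stand-ins, not SDL2 calls) of the same idea: each enum state owns its own poll >> input >> render mini loop, so a button can only receive events while it is also the one being drawn.

    # Sketch: per-state mini loops; hypothetical stand-ins, not the SDL2 API.
    MENU, GAME = 0, 1
    state = MENU

    class Button:
        def __init__(self, name):
            self.name = name
        def handle(self, event):
            if event == "click":
                print(self.name, "button clicked")

    menu_button, game_button = Button("menu"), Button("game")

    def menu_loop(events):
        global state
        for event in events:           # only the menu button sees these events
            menu_button.handle(event)
        state = GAME                   # pretend the click switched states
        # menu rendering would happen here, in the same mini loop

    def game_loop(events):
        for event in events:           # the menu button is unreachable here
            game_button.handle(event)
        # game rendering would happen here

    for frame_events in (["click"], ["click"]):   # two simulated frames
        if state == MENU:
            menu_loop(frame_events)
        else:
            game_loop(frame_events)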

Pygame: Character Centered Movement System (Diablo II like click-to-move)

I am currently working on a new RPG game using Pygame (my aim here is really to learn object-oriented programming). I started a few days ago and developed a movement system where the player clicks a location and the character sprite moves there, stopping when it arrives, which I check by testing whether the sprite 'collides' with the mouse position.
I quickly found, however, that this limits the world to the size of the app window.
I then started looking into a movement system where the background moves relative to the player, providing the illusion of movement.
I achieved this by keeping a variable that tracks my background map position (the map is much bigger than the app window) and, each time I want my player to move, offsetting the background by the player's speed in the opposite direction.
My next problem now is that I can't get my character to stop moving... because the character sprite never actually reaches the last position clicked by the mouse, since it is the background that is moving, not the character sprite.
I was thinking of adding a variable that tracks how many displacements it would take the character sprite to reach the clicked position if it were the one moving. Since the background moves at the character sprite's speed, centering the clicked point on the character takes exactly that many background displacements in the x and y directions.
It would be something like this:

    if event.type == pygame.MOUSEBUTTONDOWN:
        # moves needed = distance from the sprite to the click, divided by speed
        NM = pygame.math.Vector2(event.pos).distance_to(player_center) // speed
    if NM != 0:
        move_background_one_step()   # shift the map opposite to the travel direction
        NM -= 1
This would mean that when my background has moved enough for the character sprite to now be just over the area of the background that was originally clicked by the player, the movement would stop since NM == 0.
I guess that my question is: does that sound like a good idea, or will it be a nightmare to handle the movement of other sprites and collisions? And are there better tools in Pygame to achieve this movement system?
I could also maybe use a clock and work out how many seconds the movements would take.
I guess that ultimately the whole challenge is dealing with a fixed reference point and making everything move around it, both with respect to that fixed reference and with respect to each other. For example, if two other sprites move toward one another while the player's character also "moves", then the movement of those two sprites has to depend both on each other's positions and on the background offset caused by the player's movement.
An interesting topic which has been frying my brain for a few nights!
Thank you for your suggestions!
You are actually asking for an opinion on game design. The way I look at it, nothing is impossible, so go ahead and try your coding. It would also be wise to look around at similar projects scattered around the net; you may be able to pick up a lot of tips without reinventing the wheel. Here is a good place to start:
scrolling mini map
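For what it is worth, a common alternative to counting displacements is to keep every sprite in fixed world coordinates and subtract a camera offset only at draw time; the character then really does reach the clicked point, and other sprites never need to know about the player's movement. A minimal sketch of the idea (the names are illustrative, not a finished design):

    # Sketch: world coordinates plus a camera offset (illustrative names)
    import pygame

    player_pos = pygame.math.Vector2(400, 300)   # world coordinates
    target = pygame.math.Vector2(400, 300)
    SPEED = 4

    def on_click(screen_pos, camera):
        global target
        # convert the click from screen space back into world space
        target = pygame.math.Vector2(screen_pos) + camera

    def step():
        global player_pos
        move = target - player_pos
        if move.length() > SPEED:        # not there yet: take one step
            player_pos = player_pos + move.normalize() * SPEED
        else:                            # close enough: snap and stop
            player_pos = pygame.math.Vector2(target)

    def camera(screen_w, screen_h):
        # keep the player centered; draw everything at world_pos - camera
        return player_pos - pygame.math.Vector2(screen_w / 2, screen_h / 2)

With this layout, collisions and the movement of other sprites are computed purely in world coordinates, and only the drawing code applies the camera offset, which sidesteps the interdependence between sprites described in the question.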

How to detect start and end of a gesture in kinect?

I am working on one-shot learning of gestures. Most of the gestures involve moving the left and right hands, and the hand joints are easily detectable using the skeletal tracking library of the Kinect SDK. My problem is how to detect when a gesture starts and when it ends, so that I can feed the coordinates of the hand-joint trajectory to the algorithm that finally classifies the gesture.
There is no way to detect the beginning of an unknown gesture within a learning engine. There must be some discrete action that tells the system a gesture is about to start. Without it, the system cannot distinguish the motion that begins a gesture from a motion between gestures, a motion moving toward the beginning, or an arbitrary motion the engine should ignore.
There are a few discrete actions that might work, depending on your situation:
a keyboard or mouse action
a known gesture to signify a new gesture is to begin/end
use voice recognition to notify the engine that you are starting/ending
some action with a short countdown timer for the user to get to "position 1" of the gesture and begin when prompted.
have a single origin for all gestures - holding your hand there for a short period to signify beginning of learning action.
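As an illustration of that last idea, here is a sketch of the "hold still to start" test in plain Python (the Kinect SDK itself is C#; get_hand_position, the start zone, and the thresholds are all made-up stand-ins that only show the logic):

    # Sketch: start recording a gesture once the hand has stayed within a
    # small radius of a start zone for HOLD_FRAMES consecutive frames.
    import math

    HOLD_FRAMES = 30                 # ~1 second at 30 fps
    START_ZONE = (0.0, 0.5, 1.5)     # metres in skeleton space (made up)
    RADIUS = 0.08

    held = 0
    recording = False
    trajectory = []

    def on_frame(hand):              # hand = (x, y, z) from the skeleton data
        global held, recording
        dist = math.dist(hand, START_ZONE)
        if not recording:
            held = held + 1 if dist < RADIUS else 0
            if held >= HOLD_FRAMES:  # hand stayed put: the gesture begins
                recording = True
                trajectory.clear()
        else:
            trajectory.append(hand)  # feed these points to the classifier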
Without some form of discrete action, the system just cannot know what you want. It will always guess, and you will always run into situations where it guesses wrong.
For executing a known gesture, your method will depend on how you store the data and on the complexity of the gesture. Here are two gesture libraries you can review to see how they work:
http://kinecttoolbox.codeplex.com/
https://github.com/EvilClosetMonkey/Fizbin.Kinect.Gestures
They may also help give ideas of how you want to start/end gestures, based on how the gesture data is stored for each situation.

Are offscreen animations ignored by rendering and CPU?

Just wondering how cocos2d manages CPU cycles and the graphics engine for CCSprites that are offscreen, including those in the middle of an animation. With many animated sprites moving on and off the screen, I could stop each animation when its sprite leaves the screen and restart it when it is about to come back, but is that necessary?
Suppose you had a layer with a bunch of them and you make the layer invisible, but don't stop the sprite animations. Will they still use CPU time?
I just did a quick test (good question :) ) in a game where I can slide the screen over a large map containing soldiers performing an 'idle' animation. They continue running when off-screen (I tacked a CCCallFunc onto a sequence inside a repeat-forever, calling a simple selector that logs).
I suspect they would also keep running when the object is not visible. That actually makes sense, especially for animations. In my use case, if the animation were stopped it could cause a cognitive disconnect when the user slides a soldier in and out of view, especially when the soldier is walking across the map: he should be able to walk into view without the user having interacted with the screen.
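If you do decide to pause work for invisible sprites, the bookkeeping is simple. A sketch of the idea in Python (visible_rect and the sprite methods are illustrative stand-ins, not cocos2d calls):

    # Sketch: pause/resume animations based on an on-screen test
    # (illustrative API, not actual cocos2d methods)
    def cull(sprites, visible_rect):
        for sprite in sprites:
            on_screen = visible_rect.intersects(sprite.bounding_box)
            if on_screen and sprite.paused:
                sprite.resume_animation()     # coming back into view
            elif not on_screen and not sprite.paused:
                sprite.pause_animation()      # save the CPU while hidden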

How to change touch priority on overlapping sprites

Is there any way to change the touch priority for cocos2d iOS sprites? What I have are multiple cards on the screen, arrayed in an arc just as they would be if you held them in your hand. In this setup they overlap, and I need to recognize which card was touched. I could measure the coordinates of each card's vertices, determine the visible area of each card, and check whether the touch was made inside that area (couldn't I?), but I figured there would be an easier way, say, changing the touch priority. That is, the card closest to the screen would have the highest priority, decreasing toward the background, so that even if a touch lands on two overlapping sprites at once, it is registered only on the sprite with the higher priority.
Reading on the internet only revealed ways to change the priority for a sprite and layer so that it defines whether the touch was made on the layer or the sprite, but that's not what I want.
As far as I know, you get exactly that behavior by default: sprites closer to you on the z axis have priority. However, I think they pass the event down to the ones behind them as well. So what you need to do is swallow the event when it reaches one of your sprites: register the touch delegate with swallowsTouches:YES and return YES from ccTouchBegan, so the touch is claimed and not passed on. Hope it helps.
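The underlying pattern, independent of cocos2d, is to hit-test sprites from topmost to bottom and stop at the first one that claims the touch. A small sketch in Python (the card attributes and methods are hypothetical, not the cocos2d API):

    # Sketch: deliver a touch only to the topmost card that contains it
    def dispatch_touch(cards, point):
        # iterate from highest z-order (front) to lowest (back)
        for card in sorted(cards, key=lambda c: c.z, reverse=True):
            if card.contains(point):
                card.on_touch(point)
                return True        # touch swallowed; cards behind never see it
        return False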