I am working on one-shot learning of gestures. Most of the gestures involve moving the left and right hands, and the hand joints are easily detectable using the skeletal tracking library of the Kinect SDK. The problem I am facing is how to detect when a gesture starts and when it ends, so that I can feed the coordinates of the hand joint trajectory to the algorithm that finally classifies the gesture.
There is no way you can detect the beginning of an unknown gesture within a learning engine. There must be some discrete action that tells the system a gesture is about to start for it to learn. Without this discrete action the system cannot know which motion is the beginning of the gesture, versus a motion in between, versus a motion moving towards the beginning, versus an arbitrary motion the engine should care nothing about.
There are a few discrete actions that might work, depending on your situation:
a keyboard or mouse action
a known gesture to signify a new gesture is to begin/end
use voice recognition to notify the engine that you are starting/ending
some action with a short countdown timer for the user to get to "position 1" of the gesture and begin when prompted.
have a single origin for all gestures - holding your hand there for a short period signifies the beginning of the learning action (sketched below).
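To make that last idea concrete, here is a minimal sketch of dwell-based start/end detection in Python. Everything in it is illustrative (the origin, the thresholds, the GestureSegmenter/classify names are mine, not Kinect SDK); the hand positions would come from whatever your skeletal tracking delivers each frame:

import math

# Illustrative values, not Kinect SDK constants; tune them for your setup.
ORIGIN = (0.0, 0.2, 1.5)   # "home" hand position in skeleton space (metres)
HOLD_RADIUS = 0.08         # within this distance counts as holding the origin
HOLD_FRAMES = 30           # roughly one second at 30 fps

def dist(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

class GestureSegmenter:
    """Starts recording after the hand dwells at ORIGIN, stops when it returns."""

    def __init__(self, classify):
        self.classify = classify   # callback that receives the finished trajectory
        self.held = 0
        self.recording = False
        self.moved_away = False
        self.trajectory = []

    def on_frame(self, hand):
        """Call once per skeleton frame with the hand joint as an (x, y, z) tuple."""
        at_origin = dist(hand, ORIGIN) < HOLD_RADIUS
        if not self.recording:
            self.held = self.held + 1 if at_origin else 0
            if self.held >= HOLD_FRAMES:
                self.recording, self.moved_away, self.trajectory = True, False, []
        else:
            self.trajectory.append(hand)
            if not at_origin:
                self.moved_away = True
            elif self.moved_away:
                # The hand came back to the origin: the gesture is finished.
                self.recording = False
                self.held = 0
                self.classify(self.trajectory)

# Usage: feed it the tracked hand position from every skeleton frame.
seg = GestureSegmenter(classify=lambda traj: print("gesture with", len(traj), "samples"))

The same structure works for any of the other triggers above; only the condition that flips recording on and off changes.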
Without some form of discrete action, the system simply cannot know what you want. It will always guess, and you will always run into situations where it guesses wrong.
For recognizing a known gesture, your method will depend on how you store the data and on the complexity of the gesture. Here are two gesture libraries that you can review to see how they work:
http://kinecttoolbox.codeplex.com/
https://github.com/EvilClosetMonkey/Fizbin.Kinect.Gestures
They may also help give ideas of how you want to start/end gestures, based on how the gesture data is stored for each situation.
I am currently working on a new RPG game using Pygame (my aim here is really to learn how to use object-oriented programming). I started a few days ago and developed a movement system where the player clicks a location and the character sprite walks to that location, stopping when it gets there by checking whether the sprite 'collides' with the mouse position.
I quickly found, however, that this greatly limited the world size (to the app window size).
So I started looking into a movement system where the background moves with respect to the player, providing the illusion of movement.
I managed to achieve this by keeping a variable that tracks my background map position (the map is much bigger than the app window), and each time I want my player to move I offset the background by the player's speed in the opposite direction.
My next problem now is that I can't get my character to stop moving... because the character sprite never actually reaches the last position clicked by the mouse, since it is the background that is moving, not the character sprite.
I was thinking of adding a variable that keeps track of how many displacements it would take the character sprite to reach the clicked position if it were the one moving. Since the background moves at the character sprite's speed, it takes the same number of background displacements in the x and y directions to bring the clicked point of the background under the character sprite at the center of the screen.
It would be something like this:
if MOUSEBUTTONDOWN:
    NM = number of moves needed to reach the clicked position,
         based on the character sprite's distance to the click and its speed

if NM > 0:
    move the background image one step and decrement NM
else:
    pass  # destination reached, stop moving
This would mean that once the background has moved enough for the character sprite to be right over the area of the background that was originally clicked, the movement stops, since NM == 0.
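Fleshed out a bit, I imagine it looking roughly like this in Pygame (the map surface, sizes and speed here are placeholders, not my actual game code):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

# Illustrative stand-ins for the real assets.
world = pygame.Surface((2000, 2000))
world.fill((40, 120, 40))
player = pygame.Rect(0, 0, 32, 32)
player.center = screen.get_rect().center   # the player sprite stays at the screen center

camera = pygame.math.Vector2(0, 0)   # top-left of the visible window, in map coordinates
speed = 4
moves_left = 0                       # the "NM" counter
step = pygame.math.Vector2(0, 0)     # one background displacement per frame

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            target = camera + event.pos                    # clicked point in map coordinates
            to_target = target - (camera + player.center)
            if to_target.length() > 0:
                moves_left = int(to_target.length() // speed)   # NM
                step = to_target.normalize() * speed

    if moves_left > 0:
        camera += step        # move the view, not the player
        moves_left -= 1

    screen.blit(world, (-round(camera.x), -round(camera.y)))
    pygame.draw.rect(screen, (200, 60, 60), player)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()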
I guess that my question is: does that sound like a good idea, or will it be a nightmare to handle the movement of other sprites and collisions? And are there better tools in Pygame to achieve this movement system?
I could also maybe use a clock and work out how many seconds the movements would take.
I guess that ultimately the whole challenge is dealing with a fixed reference point and making everything move around it, both with respect to this fixed reference and with respect to each other. E.g. if two other sprites move toward one another while the player's character also "moves", then the movement of those two sprites has to depend both on each other's position and on the background offset caused by the movement of the player's character.
An interesting topic which has been frying my brain for a few nights!
Thank you for your suggestions!
You are actually asking for an opinion on game design. The way I look at it, nothing is impossible, so go ahead and try your coding. It would also be wise to look around at similar projects scattered around the net; you may be able to pick up a lot of tips without reinventing the wheel. Here is a good place to start:
scrolling mini map
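To give you a feel for the pattern those scrolling examples usually follow: keep every sprite, including the player, in world coordinates, keep a single camera offset, and only subtract the offset when drawing. Sprite-to-sprite logic then never has to know about the camera. A minimal sketch (all names here are placeholders, not from your code):

import pygame

def screen_to_world(screen_pos, camera):
    """Convert a mouse click (screen space) into world space."""
    return pygame.math.Vector2(screen_pos) + camera

def world_to_screen(world_pos, camera):
    """Convert a world-space position into a screen position, for drawing only."""
    return (world_pos[0] - camera.x, world_pos[1] - camera.y)

def chase(chaser, target, speed):
    """Sprite-to-sprite behaviour works purely in world coordinates; the camera
    never appears here, so it is unaffected by the player 'moving' the background."""
    direction = pygame.math.Vector2(target.center) - chaser.center
    if direction.length() > speed:
        step = direction.normalize() * speed
        chaser.move_ip(round(step.x), round(step.y))

With that split, collisions and movement between other sprites stay simple, and the camera offset only shows up in the draw code.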
I would like to make a multi-touch control for my camera.
This camera should:
zoom in/out when pinching
orbit when swiping
pan when swiping with 2 fingers.
Does anybody know of some good examples/tutorials, or can you give me some advice?
Thank you so much
The best example I found was the Strategy Game (Tower defense) sample that comes with the Unreal Engine. It demonstrates an independent camera system in C++ that responds to touch gestures.
As a simplified but very similar approach, you may also find useful my UE4TopDownCamera sample project for a top-down camera with:
spread/pinch or mouse wheel up/down for zoom in/out (implemented as dollying)
swipe with one finger for panning
on/off functionality to lock onto/follow the main character or move the camera freely.
Please note that the gestures are not exactly the ones you described, as my requirements were different.
I'll soon upload a full explanation and a video on GitHub.
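Until then, the core of the pinch-to-zoom mapping is small enough to sketch here in plain Python (the function names and clamp values are mine, not UE4 API; in the sample this logic lives in the camera pawn's touch handlers):

import math

MIN_DIST, MAX_DIST = 300.0, 3000.0   # clamp on the camera boom length

def pinch_start(finger1, finger2):
    """Remember the finger separation when the second touch begins."""
    return math.dist(finger1, finger2)

def pinch_update(finger1, finger2, start_separation, start_length):
    """Return the new boom length for the current finger positions."""
    separation = math.dist(finger1, finger2)
    if start_separation <= 0 or separation <= 0:
        return start_length
    # Spreading the fingers shortens the boom (dolly in); pinching lengthens it.
    new_length = start_length * (start_separation / separation)
    return max(MIN_DIST, min(MAX_DIST, new_length))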
I am confused as to how applications handle the mouse input with Direct2D.
Consider the case:
Say I painted a ball on the window. If I move the cursor over the ball, it will change color.
Does this mean that the application has to poll the mouse and check whether the cursor is over the ball for pretty much the entire running time of the application?
Doesn't that decrease performance when you have more items? Is there another way than polling, which in other words is a bunch of if-statements?
You need to perform the hit test when the mouse moves, or when the ball moves.
Efficient hit testing / collision detection is a major concern in game development. If performance becomes an issue, collision detection is usually performed in two phases: a "broad phase" and a "narrow phase". One approach for the broad phase is a quad tree (for two dimensions): the space (the window) is divided into "sections"; each ball and the mouse are assigned to a section according to their position. Only balls in the same section as the mouse are candidates for the narrow phase. In the narrow phase, you then test only the candidates that survived the broad phase.
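As an illustration of the two phases, here is a uniform-grid broad phase (a simpler cousin of the quad tree; the section size and the ball representation are made up for the example, and it assumes ball radii are no larger than one section):

from collections import defaultdict

CELL = 100   # size of one grid section in pixels; tune it to the typical ball size

def cell_of(x, y):
    return (int(x) // CELL, int(y) // CELL)

def build_grid(balls):
    """Broad phase: bucket every ball by the grid section its centre falls in."""
    grid = defaultdict(list)
    for cx, cy, r in balls:              # each ball is a (centre_x, centre_y, radius) tuple
        grid[cell_of(cx, cy)].append((cx, cy, r))
    return grid

def balls_under_cursor(grid, mx, my):
    """Narrow phase: exact point-in-circle test, but only against nearby candidates."""
    gx, gy = cell_of(mx, my)
    hits = []
    # Check the cursor's section and its 8 neighbours, in case a ball straddles a border.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for bx, by, r in grid.get((gx + dx, gy + dy), []):
                if (mx - bx) ** 2 + (my - by) ** 2 <= r * r:
                    hits.append((bx, by, r))
    return hits

Rebuild (or update) the grid when balls move, and run balls_under_cursor only from the mouse-move handler.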
Usually applications will listen for the WM_MOUSEMOVE message and update any affected objects in the event handler (or signal the affected objects of the change, in case updating them is an expensive operation). The render code shouldn't need to poll where the mouse cursor is.
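A minimal sketch of that event-driven shape, with a generic on_mouse_move callback standing in for the WM_MOUSEMOVE handler (the Ball class and the colour handling are invented for the example):

class Ball:
    def __init__(self, x, y, r):
        self.x, self.y, self.r = x, y, r
        self.hovered = False          # the render code only reads this flag

    def contains(self, px, py):
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.r * self.r

balls = [Ball(100, 100, 30), Ball(250, 180, 40)]

def on_mouse_move(x, y):
    """Runs only when the OS reports movement (WM_MOUSEMOVE), never every frame."""
    changed = False
    for ball in balls:
        hovered = ball.contains(x, y)
        changed |= (hovered != ball.hovered)
        ball.hovered = hovered
    return changed      # the caller can request a redraw only when something changed

def render():
    """Drawing never polls the cursor; it only looks at the stored hover state."""
    for ball in balls:
        colour = (255, 0, 0) if ball.hovered else (0, 0, 255)
        # ... draw the ball with `colour` using your graphics API (Direct2D here) ...

So the cost is one pass over the balls per mouse-move message (or per candidate set, if you add a broad phase as described above), not per rendered frame.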
Reading this article "Taking Advantage of High-Definition Mouse Movement" - http://msdn.microsoft.com/en-us/library/windows/desktop/ee418864(v=vs.100).aspx, I surmise that one should use raw input for more precise readings from input devices.
The article states that WM_MOUSEMOVE's primary disadvantage is that it is limited to the screen resolution.
Upon close inspection of the RAWMOUSE structure, I see that lLastX and lLastY are long values and that you get the movement delta from them.
To me it looks like WM_MOUSEMOVE and WM_INPUT are the same, except that with WM_INPUT you do not get acceleration (pointer ballistics) applied.
Are both WM_MOUSEMOVE and WM_INPUT limited to the screen resolution?
If so, what is the benefit of using WM_INPUT?
RAWMOUSE gives you logical coordinates for the mouse based on the mouse's native resolution.
That is, you see the actual movement of the mouse.
Windows will use the mouse speed and acceleration (ballistics) settings to update the cursor position. The two are of course not directly linked: the apparent movement of the mouse must be interpreted to generate a cursor movement, else how could more than one mouse be supported?
If you wish to control a pointer, as far as I can tell there is no reason to duplicate the Windows mouse ballistics calculations. Just let Windows do it. Therefore, for controlling the pointer, you should just use WM_MOUSEMOVE. That is, unless you wish to disable the mouse acceleration settings in your application.
However, if you want to control the player's POV (point of view), or use the mouse to control an in-game object such as a spaceship flight yoke, then the RAWMOUSE data gives you the best possible access to the movement of the mouse, and you can implement your own algorithm to convert that into flight yoke/POV movement.
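For example, the conversion from raw deltas to a POV camera usually amounts to no more than this (the deltas would be the lLastX/lLastY values you read in your WM_INPUT handler; the sensitivity and clamp values here are invented):

import math

SENSITIVITY = 0.0022           # radians of rotation per raw mouse count; tune to taste
PITCH_LIMIT = math.radians(89)

class PovCamera:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def apply_raw_delta(self, last_x, last_y):
        """Feed in the lLastX/lLastY deltas from each raw input message."""
        self.yaw += last_x * SENSITIVITY
        self.pitch -= last_y * SENSITIVITY   # pushing the mouse forward looks up
        # Clamp pitch so the view cannot flip over the top.
        self.pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, self.pitch))

    def forward_vector(self):
        """Unit vector the camera looks along, derived from yaw and pitch."""
        cp = math.cos(self.pitch)
        return (cp * math.cos(self.yaw), math.sin(self.pitch), cp * math.sin(self.yaw))

Because the input is the unaccelerated device delta, the mapping stays consistent regardless of the user's pointer speed settings.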
The main benefit, and a reason to use it, is that with raw input you can use two or more mice. I am currently writing a small game prototype designed to be played by two players with two mice. It is more complicated, but it works, and it is nice because I do not need to link any external libraries.
I am developing a project using the Native SDK for BlackBerry 10, and I am using the BlackBerry 10 Dev Alpha Simulator for testing purposes. I can't seem to simulate a pinch event, and some searching revealed that this is not yet implemented in the simulator.
So basically, I need a method to programmatically create a pinch and run it when some other event is triggered. What is the easiest way to do this?
Edit:
I am not looking for language-agnostic solutions; I need an architectural implementation. How would one go about using gesture_pinch_t to create a pinch event (even with hardcoded parameters)?
I'm more involved with the WebWorks and AIR team at RIM, but off the top of my head a language-agnostic solution would be something like the following:
You have some handler for the pinch event, which is able to process the data passed by the event (gesture_pinch_t).
Instead of using the pinch event to trigger the callback, you can simulate a pinch with some other obtainable event (perhaps a double tap, or a test toggle button that, once turned on, makes every touch event act as the start of a simulated pinch).
You then make the centroid property your starting coordinate, and as you drag with your finger (or, in this case, with your cursor in the simulator), you calculate the distance property by subtracting the origin coordinate you made your centroid from the current coordinate.
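Roughly, in plain Python terms (the names mirror the idea rather than the exact gesture_pinch_t layout, which you would fill in from the NDK headers):

import math

class SimulatedPinch:
    """Fakes a pinch from a single-pointer drag: the first point becomes the centroid,
    and the drag distance from it stands in for the finger separation."""

    def __init__(self, on_pinch):
        self.on_pinch = on_pinch      # your existing pinch handler
        self.centroid = None

    def pointer_down(self, x, y):
        self.centroid = (x, y)        # starting coordinate = centroid of the "pinch"

    def pointer_move(self, x, y):
        if self.centroid is None:
            return
        # Distance from the centroid plays the role of the pinch distance property.
        distance = math.hypot(x - self.centroid[0], y - self.centroid[1])
        self.on_pinch(self.centroid, distance)

    def pointer_up(self, x, y):
        self.centroid = None

# Usage: route the pointer events from your double tap / test toggle into this object,
# and let on_pinch drive the same code path your real pinch handler uses.
pinch = SimulatedPinch(on_pinch=lambda c, d: print("pinch at", c, "distance", d))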
Again, I haven't delved into the NDK specifically, but this is the approach I would take with JavaScript or ActionScript, and it is quite doable. I wish I could give you an NDK-specific snippet, but hopefully this helps take you in the right direction.
Cheers!
Just to let you know that multiple touch gestures are now supported in the simulator. Right-click and drag to add a touch event, do it again to simulate more touch events, then left-click to execute them at the same time.
Example of a pinch gesture (screenshot omitted).