I am making a game in Cocos2d. I have a ball that will be shot at a flat surface (the top of the screen). How can I make it so the ball travels, hits the surface, then reflects off at the mirrored angle and continues in that direction? Does that make sense? Please tell me if it doesn't, and I will clarify. Thanks!
EDIT:
Here's an illustration of what I want
You could build the game using Box2D (which comes bundled with cocos2d). Then you get that bounce "effect" for free.
Once you launch a ball at an angle, say 50 degrees, add (cos(50)*speed) to its X position and (sin(50)*speed) to its Y position each frame.
When you detect that the ball's Y position has reached the surface's Y position, just flip the angle to -50.
Keep in mind this only works for a reflection off a top surface: the ball hits the top surface and bounces back down.
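If you'd rather not pull in Box2D, here is a minimal sketch of that manual approach, assuming a cocos2d-x style per-frame update and hypothetical members (ballX, ballY, angleDegrees, speed, surfaceY) that you would replace with your own:

```cpp
#include <cmath>

// Called every frame; dt is the elapsed time in seconds.
// ballX, ballY, angleDegrees, speed and surfaceY are assumed members.
void Game::update(float dt)
{
    const float kPi = 3.14159265f;
    const float radians = angleDegrees * kPi / 180.0f;

    // Advance the ball along its current heading.
    ballX += std::cos(radians) * speed * dt;
    ballY += std::sin(radians) * speed * dt;

    // When the ball reaches the top surface, mirror the angle so it
    // travels back down at the reflected angle (e.g. 50 -> -50).
    if (ballY >= surfaceY)
    {
        ballY = surfaceY;            // keep it from sinking into the surface
        angleDegrees = -angleDegrees;
    }
}
```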
In my project (C++/UE4), I have a lever mesh sticking out of the floor. Holding the left mouse button on this lever and moving the mouse initiates a dragging operation. This dragging operation calculates the 2D mouse-delta movements and uses that data to rotate the lever *in local space*; the lever can only rotate around a single axis (in either direction, but still only one axis).
But what if, instead of being in front of the lever, I'm actually behind it? What if I'm on one of its sides? What if the lever is actually sticking out of a wall instead of the floor?... How do I make it so that mouse movements actually rotate the lever appropriate to the angle at which it is viewed from, regardless of the lever's orientation?
To further explain myself...
Here's a list of scenarios, and how I'd like the mouse to control them:
When the lever's on the FLOOR and you are in FRONT of it:
If you move the mouse UP (-Y), it should rotate away from the camera
If you move the mouse DOWN (+Y), it should rotate toward the camera
When the lever's on the FLOOR and you are BEHIND it:
If you move the mouse UP (-Y), it should rotate away from the camera
(which is the opposite world-space direction of when you are in front of it)
If you move the mouse DOWN (+Y), it should rotate toward the camera
(which is the opposite world-space direction of when you are in front of it)
When the lever's on the FLOOR and you are BESIDE it:
If you move the mouse LEFT (-X), it should rotate to the LEFT of the camera
(which is the opposite direction of when you are on the other side of it)
If you move the mouse RIGHT (+X), it should rotate to the RIGHT of the camera
(which is the opposite direction of when you are on the other side of it)
When the lever's on a WALL and you are in FRONT of it:
If you move the mouse UP, it should rotate UP (toward the sky)
If you move the mouse DOWN, it should rotate DOWN (toward the floor)
When the lever's on the WALL and you are BESIDE it:
Same as when it's on the wall and you are in front of it
PLEASE NOTE, if it helps at all, that UE4 does have built-in 2D/3D vector math functions, as well as easy ways to project and deproject coordinates to/from the 3D world or the 2D screen. Because of this, I always know the exact world-space and screen-space coordinates of the mouse location, the lever's pivot (base) location, and the lever's handle (top) location, as well as the amount (delta) that the mouse has moved each frame.
Get the pivot of the lever (the point around which it rotates) and project it to screen coordinates. Then, when you first click, store the screen coordinates of the click.
When you need to know which way to rotate, compute the dot product between the vector from the pivot to the first click and the vector from the pivot to the current mouse location (normalize both vectors before taking the dot product). This gives you cos(angle) of how far the mouse has swept; take arccos(value) to get the angle and use it to move the lever in 3D. It will be a bit wonky, since the angle on screen is not the same as the projected angle, but it's easier to control this way (if you move the mouse 90 degrees, the lever moves 90 degrees, even if they don't align perfectly). Play with the setup and see what works best for your project.
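A rough sketch of that first approach, assuming UE4's FVector2D and screen-space inputs. Note that the dot product alone only gives the magnitude of the angle; the 2D cross product added here supplies the sign (which way to rotate):

```cpp
#include "CoreMinimal.h"

// pivotScreen:      lever pivot projected to screen space
// firstClickScreen: mouse position stored on the initial click
// currentScreen:    mouse position this frame
float ComputeLeverAngleDegrees(const FVector2D& pivotScreen,
                               const FVector2D& firstClickScreen,
                               const FVector2D& currentScreen)
{
    const FVector2D a = (firstClickScreen - pivotScreen).GetSafeNormal();
    const FVector2D b = (currentScreen - pivotScreen).GetSafeNormal();

    // Dot product of the normalized vectors gives cos(angle).
    const float cosAngle = FVector2D::DotProduct(a, b);
    const float angle =
        FMath::RadiansToDegrees(FMath::Acos(FMath::Clamp(cosAngle, -1.f, 1.f)));

    // 2D cross product tells us which side of 'a' the vector 'b' is on,
    // i.e. whether to rotate the lever forward or backward.
    const float sign = (a.X * b.Y - a.Y * b.X) >= 0.f ? 1.f : -1.f;
    return angle * sign;
}
```

You would then feed the returned angle (or its per-frame change) into whatever rotation call drives your lever.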
Another way to do it: when you first click, store the 3D point at the end of the lever (or, even better, the point where you clicked on the lever). Then move that point within the camera's projection plane: take the camera's up vector, make it orthogonal to the camera's view direction, and take the view direction cross the up vector to get the right direction. Apply the mouse delta movements to the point along those two directions, project the result into the lever's rotation plane, and rotate the lever to align with the projected point (the math is similar to the approach above, just using 3D points instead of the screen projections).
Caution: this doesn't work well if the camera is very close to the plane of rotation, since it's not always clear whether the lever should move forward or backward.
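A rough sketch of that second approach, with assumed parameter names and a purely illustrative sensitivity factor; depending on your engine's handedness and screen-Y convention, some signs may need flipping:

```cpp
#include "CoreMinimal.h"

// Moves the stored grab point in the camera's view plane by the mouse delta,
// then projects it back onto the lever's plane of rotation.
FVector DragGrabPoint(FVector grabPoint,
                      const FVector2D& mouseDelta,
                      const FVector& cameraForward,
                      FVector cameraUp,
                      const FVector& leverPivot,
                      const FVector& rotationPlaneNormal,
                      float sensitivity)
{
    // Make "up" orthogonal to the view direction, then build "right".
    cameraUp = (cameraUp - FVector::DotProduct(cameraUp, cameraForward) * cameraForward)
                   .GetSafeNormal();
    const FVector cameraRight = FVector::CrossProduct(cameraForward, cameraUp);

    // Move the grab point in the camera plane according to the mouse delta.
    grabPoint += cameraRight * mouseDelta.X * sensitivity;
    grabPoint += cameraUp * -mouseDelta.Y * sensitivity;   // screen Y usually points down

    // Project the moved point onto the lever's rotation plane.
    const float dist = FVector::DotProduct(grabPoint - leverPivot, rotationPlaneNormal);
    return grabPoint - dist * rotationPlaneNormal;
}
```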
I'm not an expert on unreal-engine4 (just learning it myself), but all of this is basic vector math and should be well supported. Check out the dot product and cross product on Wikipedia; they are super useful for these kinds of tricks.
Here's one approach:
When the user clicks on the lever, suppose there is a plane through the pivot of the lever whose normal is the direction from the camera to the pivot. Calculate the intersection point of the cursor's ray with that plane:
FVector rayOrigin;          // start of the cursor's world-space ray (deprojected from the mouse)
FVector rayDirection;       // normalized direction of that ray
FVector cameraPosition;
FVector leverPivotPosition;

// Plane normal points from the camera toward the pivot.
FVector planeNormal = (leverPivotPosition - cameraPosition).GetSafeNormal(0.0001f);

// Ray/plane intersection: solve for t where (rayOrigin + t * rayDirection) lies on the plane.
// '|' is UE4's dot-product operator for FVector.
float t = ((leverPivotPosition - rayOrigin) | planeNormal) / (planeNormal | rayDirection);
FVector planeHitPosition = rayOrigin + rayDirection * t;
Do a scalar projection of that onto the local top/bottom axis of the lever. Let's assume it's the local up/down axis:
FVector leverLocalUpDirectionNormalized;  // the lever's local up axis, in world space, normalized
// '|' again is the dot product: how far along the lever's up axis the hit lies.
float scalarPosition = planeHitPosition | leverLocalUpDirectionNormalized;
Then, in every subsequent frame while the lever is held down, calculate the new scalarPosition for that frame. As scalarPosition increases between frames, the lever should rotate towards its up side; as it decreases, the lever should rotate towards its down side.
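A sketch of that per-frame update, assuming a hypothetical actor class and mesh component; AddLocalRotation and the degrees-per-unit factor are just one way to apply the delta, so match it to however your lever is actually rotated:

```cpp
// Called each frame while the lever is held.
// previousScalarPosition is a member stored from the last frame;
// LeverMesh is the component being rotated (assumed names).
void ALeverActor::UpdateHeldLever(float scalarPosition)
{
    const float delta = scalarPosition - previousScalarPosition;
    previousScalarPosition = scalarPosition;

    // Positive delta: the cursor moved toward the lever's local "up" side,
    // so rotate the lever that way; negative delta rotates it back.
    const float degreesPerUnit = 0.5f;   // tuning value, purely illustrative
    LeverMesh->AddLocalRotation(FRotator(delta * degreesPerUnit, 0.f, 0.f));
}
```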
As shown in the image below, the user moves the ball by changing its x, y, z coordinates, which correspond to right/left, up/down, and near/far movements respectively. But when we change the camera from position A to position B, things look weird: right doesn't look right any more, because the ball still moves in the previous coordinate frame shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect how its displacement looks?
Simple example: if we place the camera so that it is looking from the positive X axis, changes in the z coordinate now look like right and left movements. In reality, changing z should always be near and far.
Thought I'd answer it here:
I solved it by multiplying the camera's modelview matrix with the ball's coordinates.
Here is the code:
double cam[16];
glGetDoublev( GL_MODELVIEW_MATRIX, cam );   // column-major, as OpenGL returns it
matsumX = cam[0]*tx + cam[1]*ty + cam[2]*tz + cam[3];
matsumY = cam[4]*tx + cam[5]*ty + cam[6]*tz + cam[7];
matsumZ = cam[8]*tx + cam[9]*ty + cam[10]*tz + cam[11];
where tx, ty, tz are the ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates, which change according to the camera.
I have a circle in the center of the world. I add some balls to the world as b2Body objects. Now I want to move or throw the balls toward the circle in the center of the screen; the effect should be that the balls collide with the circle.
The balls are positioned randomly, so they can be anywhere on the screen, and they need to travel to the circle at the center of the screen.
Can anyone tell me how to do this? I have no idea how to move a b2Body object.
I want the blue circle to attract the red circles, or in other words, I want the red circles to move toward the blue circle.
I finally found the solution to my problem. I used the concept of radial gravity.
http://www.vellios.com/2010/06/06/box2d-and-radial-gravity-code/
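For reference, the basic idea behind radial gravity is to apply a force toward the center body on every ball each simulation step. A minimal sketch (the strength constant is purely illustrative, the include path and the final wake flag on ApplyForce vary between Box2D versions):

```cpp
#include <Box2D/Box2D.h>
#include <vector>

// Apply a force on every ball, pulling it toward the center body.
// Call this once per simulation step, before world->Step().
void ApplyRadialGravity(b2Body* centerBody, const std::vector<b2Body*>& balls, float strength)
{
    const b2Vec2 center = centerBody->GetWorldCenter();

    for (b2Body* ball : balls)
    {
        b2Vec2 toCenter = center - ball->GetWorldCenter();
        toCenter.Normalize();                       // direction only
        toCenter *= strength * ball->GetMass();     // scale by mass so all balls accelerate alike

        ball->ApplyForce(toCenter, ball->GetWorldCenter(), true);
    }
}
```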
I'm not sure about this, but you could try the following. I guess you are adding a b2Body by touching the screen; I don't know what type your b2Body is.
There are two ways to move a b2Body.
Check this link: http://www.cocos2d-iphone.org/forum/topic/21620
From this link:
I am guessing that in your game the body is a static body (b2_staticBody). What you can do is move your sprite toward the center of the screen, and in the tick method change the position of the corresponding body to match the sprite, as sketched below.
You also need to stop moving the sprite once it hits the center ball.
This may be a possible approach, if I understood your question correctly.
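A rough cocos2d-x style sketch of keeping a body and its sprite in sync each tick; the sprite/body pairing, the step size, and the pixels-to-meters ratio are assumptions you'd adapt to your project:

```cpp
#include "cocos2d.h"
#include <Box2D/Box2D.h>

// Called from the tick/update method each frame.
// Moves a (static or kinematic) body a small step toward the screen center,
// and keeps its sprite at the same position.
void MoveBallTowardCenter(b2Body* body, cocos2d::Sprite* sprite,
                          const b2Vec2& centerInMeters, float stepPerFrame, float ptmRatio)
{
    b2Vec2 pos = body->GetPosition();
    b2Vec2 toCenter = centerInMeters - pos;

    if (toCenter.Length() > stepPerFrame)
    {
        toCenter.Normalize();
        pos += stepPerFrame * toCenter;
        body->SetTransform(pos, body->GetAngle());
    }
    // else: the ball has reached the center circle, so stop moving it.

    sprite->setPosition(pos.x * ptmRatio, pos.y * ptmRatio);
}
```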
I'm writing a screensaver with a bouncing ball (x and y, does not bounce in Z) in C++ using OpenGL. When this ball touches the edges of the screen, a small patch of damage will appear on the ball. (When the ball is damaged enough, it will explode.) Finding the part of the ball to damage is the easy part when the ball isn't rotating.
The algorithm I decided on is to keep the positions of the left-most, right-most, top-most, and bottom-most vertices. For every collision, I obviously need to know which screen edge was hit. Before the ball could rotate, when it touched a screen edge I knew which vertex was the contact point: if it hit the left edge of the screen, the left-most vertex is the point on the ball that took the hit. From there, I get all vertices within distance d of that point. I don't need the actual vertex that was hit, just the point on the ball's surface.
Doing it this way, I don't need to read every vertex, translate it by the ball's x, y position, and check which ones are off-screen. That brute-force approach would solve all my problems, but it would be slow as hell.
Currently, the ball's rotation is controlled by pitch, yaw, and roll. The problem is: which point on the ball's outer surface has touched the edge of the screen, given my yaw, pitch, and roll angles? I've looked into keeping up, right, and direction vectors, but I'm totally new to this and, as someone might notice, totally lost. I've read the rotation matrix article on Wikipedia several times and I'm still drawing a blank. If I got rid of one rotation angle it would be much simpler, but I would prefer not to.
If you have your rotation angles, you can recreate the model-view matrix in your code. With that matrix you can apply the rotation to the vertices of the mesh (simply by multiplication) and then find the left-most (or whichever) vertex, as you did before.
This article explains how to construct the rotation matrix with the angles you have.
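A sketch of that idea: build a rotation matrix from yaw, pitch, and roll, rotate the vertices, and pick the left-most one. The composition order Rz(roll)·Ry(yaw)·Rx(pitch) used here is just one common convention; match it to however you apply the rotations when rendering:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Build a row-major 3x3 rotation matrix as Rz(roll) * Ry(yaw) * Rx(pitch).
void BuildRotation(float yaw, float pitch, float roll, float m[3][3])
{
    const float cy = std::cos(yaw),   sy = std::sin(yaw);
    const float cp = std::cos(pitch), sp = std::sin(pitch);
    const float cr = std::cos(roll),  sr = std::sin(roll);

    m[0][0] = cr * cy;  m[0][1] = cr * sy * sp - sr * cp;  m[0][2] = cr * sy * cp + sr * sp;
    m[1][0] = sr * cy;  m[1][1] = sr * sy * sp + cr * cp;  m[1][2] = sr * sy * cp - cr * sp;
    m[2][0] = -sy;      m[2][1] = cy * sp;                 m[2][2] = cy * cp;
}

// Rotate every vertex and return the index of the left-most one (smallest x).
// The same loop works for right/top/bottom by changing the comparison and axis.
std::size_t FindLeftMostRotated(const std::vector<Vec3>& verts, const float m[3][3])
{
    std::size_t best = 0;
    float bestX = 1e30f;
    for (std::size_t i = 0; i < verts.size(); ++i)
    {
        const Vec3& v = verts[i];
        const float rx = m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z;
        if (rx < bestX) { bestX = rx; best = i; }
    }
    return best;
}
```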
I am new to cocos2d. I have a sprite that is traveling in some direction; when this sprite hits a wall at an angle, I want it to change direction and continue along the reflected path. Can anyone give me some sample code?
Have a look at this code: iPhone Cocos2d Bouncing Ball.
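The usual formula behind that kind of bounce is the vector reflection v' = v − 2(v·n)n, where n is the wall's unit normal. A minimal cocos2d-x style sketch (in cocos2d-iphone you'd use CGPoint and the ccp helpers instead):

```cpp
#include "cocos2d.h"
using cocos2d::Vec2;

// Reflect the sprite's velocity about the wall's unit normal:
// v' = v - 2 * (v . n) * n
Vec2 ReflectVelocity(const Vec2& velocity, const Vec2& wallNormal)
{
    const Vec2 n = wallNormal.getNormalized();
    const float d = 2.0f * velocity.dot(n);
    return velocity - n * d;
}
```

After the collision, keep moving the sprite along the reflected velocity each frame and it will continue on the mirrored path.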