3D rotation of an element in RaphaelJS

I am hoping you might be able to help me determine whether the following animation is possible in Raphael.js. I am trying to have an element fly off the page; the idea is to have it appear to fall/fly off in 3D. I can tell the element to rotate X degrees and slide off, but it lacks the look of the element being independent of the background. What I would like is to tell Raphael to rotate the top corner "out" as the element falls, giving the illusion of it falling out of view like a picture falling off a wall. Is this even possible, or does Raphael only operate in two-dimensional space?

Raphael only deals with 2D space. To implement a 3D flip you have to fake it. Thankfully Raphael implements Scale(sx,sy,x,y) as a transform op, so you can scale about an origin to fake a 3D 'flip' rotation.
For example:
Raphael.el.flipXTransform = function (parentBbox) {
    var x = this.getBBox().x;
    var width = this.getBBox().width;
    parentBbox = parentBbox || { width: width, x: x };
    var parentWidth = parentBbox.width;
    var parentX = parentBbox.x;
    var originX = parentX - x + parentWidth / 2;
    return 's-1,1,' + originX + ',0';
};

Raphael.el.flipX = function (duration, easing, parentBbox) {
    duration = duration || 500;
    easing = easing || 'easeInOut';
    var scale = this.flipXTransform(parentBbox);
    this.animate({ transform: '...' + scale }, duration, easing);
};
Here's a fiddle example for you to play with. The downside is that this doesn't convey perspective the way a true 3D rotation does.
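For reference, a minimal usage sketch of the helpers above (the paper setup and the rectangle are illustrative, not part of the original answer):
// Hypothetical usage of the flipX plugin defined above; assumes a div with id 'holder'.
var paper = Raphael('holder', 400, 300);
var card = paper.rect(50, 50, 120, 80).attr({ fill: '#c00' });
// Animate the fake flip over 800 ms; with no parentBbox argument,
// the scale origin is computed from the element's own bounding box.
card.flipX(800);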

Related

Ball rolling and turning effect with SpriteKit

I have created a ball node and applied texture images captured from my 3D model: 6 images in total, 3 (120° apart) for rolling around the x axis and 3 for rolling around the y axis. I want SpriteKit to simulate the rolling with the code below. When I apply an impulse, the ball starts sliding instead of rolling, and when it collides with the sides it starts turning, but again not rolling. Normally, depending on the impulse, the ball should turn and roll together. The balls in an "8 ball pool" game are an example of the effect I want to get.
var ball = SKSpriteNode()
var textureAtlas = SKTextureAtlas()
var textureArray = [SKTexture]()

override func didMove(to view: SKView) {
    textureAtlas = SKTextureAtlas(named: "white")
    for i in 0..<textureAtlas.textureNames.count {
        let name = "ball_\(i).png"
        textureArray.append(SKTexture(imageNamed: name))
    }
    ball = SKSpriteNode(imageNamed: textureAtlas.textureNames[0])
    ball.size = CGSize(width: ballRadius * 2, height: ballRadius * 2)
    ball.position = CGPoint(x: -ballRadius / 2 - 20, y: -ballRadius - 20)
    ball.zPosition = 0
    ball.physicsBody = SKPhysicsBody(circleOfRadius: ballRadius)
    ball.physicsBody?.isDynamic = true
    ball.physicsBody?.restitution = 0.3
    ball.physicsBody?.linearDamping = 0
    ball.physicsBody?.allowsRotation = true
    addChild(ball)
}
You need to apply an angular impulse to get it to rotate:
node.physicsBody!.applyAngularImpulse(1.0)

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective; for the time being I am showing the coordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the 'triangular' empty corner areas of the textures. I think the issue is something like below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean that checks the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off corner areas of each texture are being considered part of the grid area, so when the player is in one of these corners it is not truly checking the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; // change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) // check if the movement will end on a legitimate road tile - UNOPTIMISED, RUNS EVERY FRAME FOR EVERY TILE
{
    int x = xpos + 7;  // get the centre bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2; // 64 is TILE_WIDTH_HALF and 25 is TILE_HEIGHT_HALF
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) // for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) // if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a little stuck on progressing the game loop side of things until this is fixed. If anyone recognises the issue from this, or might be able to help, I'd appreciate it. For reference, my tile textures are 128x64 pixels and the math behind drawing them to screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which makes the rest of your logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. now your "can I move 'down'" logic is much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
    logicalX, logicalY = transform(clickX, clickY)
    return (logicalX / 64, logicalY / 64)
}
You can do checks like seeing whether x0,y0 and x1,y1 are on the same tile in logical space with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
    return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
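To make the transforms concrete, here is a small JavaScript sketch using the tile metrics from the question (half-width 64, half-height 25). Treat it as an illustration of working in logical space, not drop-in code for the project above:
// Tile metrics from the question: tiles drawn 128 wide and treated as 50 high.
var TILE_WIDTH_HALF = 64;
var TILE_HEIGHT_HALF = 25;

// Logical (grid) coordinates -> isometric screen coordinates.
function toScreen(logicalX, logicalY) {
    return {
        x: (logicalX - logicalY) * TILE_WIDTH_HALF,
        y: (logicalX + logicalY) * TILE_HEIGHT_HALF
    };
}

// Isometric screen coordinates -> fractional logical coordinates.
function toLogical(screenX, screenY) {
    return {
        x: (screenX / TILE_WIDTH_HALF + screenY / TILE_HEIGHT_HALF) / 2,
        y: (screenY / TILE_HEIGHT_HALF - screenX / TILE_WIDTH_HALF) / 2
    };
}

// Which tile is a screen point on? Take the integer part of the logical
// coordinates (floor vs. round depends on where your tile origin is anchored);
// this follows the diamond shape rather than the square texture bounds.
function whichTile(screenX, screenY) {
    var p = toLogical(screenX, screenY);
    return { x: Math.floor(p.x), y: Math.floor(p.y) };
}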

Removing body (balls) from physics engine

I've been trying to remove elements (balls) that have been added to the Physics engine, but I can't find a way to do it.
This is the code I'm using to add the molecules to the Physics Engine:
var numBodies = 15;

function _addMolecules() {
    for (var i = 0; i < numBodies; i++) {
        var radius = 20;
        var molecule = new Surface({
            size: [radius * 2, radius * 2],
            properties: {
                borderRadius: radius + 'px',
                backgroundColor: '#' + (0x1000000 + (Math.random()) * 0xffffff).toString(16).substr(1, 6)
            }
        });
        molecule.body = new Circle({
            radius: radius,
            mass: 2
        });
        this.pe.addBody(molecule.body);
        this.molecules.push(molecule);
        this.moleculeBodies.push(molecule.body);
        molecule.state = new Modifier({ origin: [0.5, 0.5] });
        //** This is where I'm applying the gravity to the balls and also where I'm checking the position of each ball
        molecule.state.transformFrom(addBodyTransform.bind(molecule.body));
        this._add(molecule.state).add(molecule);
    }
}
In the addBodyTransform function I'm adding gravity to the balls and checking their positions; any ball that goes outside the top of the viewport should be removed completely (I'm only using walls on the left, right and bottom edges of the viewport).
function addBodyTransform() {
    var pos;
    for (var i = 0; i < thisObj.moleculeBodies.length; i++) {
        pos = thisObj.moleculeBodies[i].getPosition();
        if (pos[1] < (-windowY / 2)) {
            // I tried this but it doesn't work
            thisObj.pe.removeBody(thisObj.moleculeBodies[i]);
            thisObj.moleculeBodies[i].render = function () { return null; };
        }
    }
    thisObj.gravity.applyForce(this);
    return this.getTransform();
}
It doesn't work. I tried a couple of other things, but had no luck, whereas changing the position of the balls in the function above worked fine:
thisObj.moleculeBodies[i].setPosition([0, 0]);
Does anybody have any idea how to remove a body (a circle in this case)?
P.S.: thisObj is the variable I'm assigning the "this" object to in the constructor function, and thisObj.pe is the instance of the PhysicsEngine(). Hope that makes sense.
After some investigation, using the unminified source code and trying out different things, I realised that there was something weird going on in the library.
Having a look at the repository, I found out that the function _getBoundAgent is being used before it is defined, which matched the error I was getting (you can check it here: https://travis-ci.org/Famous/physics). So it looks like a bug in the Famo.us source code. Hopefully it will be fixed in the next release.
For the time being, I had to create a hack, which is basically detaching all agents (as well as gravity) from the balls that go outside the viewport and setting their (fixed) position far outside the viewport (about -2000px in both directions).
I know it is not the best approach (a dirty one indeed), but if you have the same problem and want to use it until they release a fix for that, here is what I did:
function addBodyTransform() {
    var pos = this.body.getPosition();
    // Check if balls are inside viewport
    if (pos[1] < (-(windowY / 2) - 100)) {
        if (!this.removed) {
            // flagging ball so the code below is executed only once
            this.removed = true;
            // Set position (x and y) of the ball 2000px outside the viewport
            this.body.setPosition([(-(windowX / 2) - 2000), (-(windowY / 2) - 2000)]);
        }
        return this.body.getTransform();
    } else {
        // Add gravity only if inside viewport
        thisObj.gravity.applyForce(this.body);
        return this.body.getTransform();
    }
}
and in the _addMolecules function, I'm adding a "molecule.removed = false":
function _addMolecules() {
    for (var i = 0; i < numBodies; i++) {
        ...
        molecule.state = new Modifier({ origin: [0.5, 0.5] });
        // Flagging molecule so I know which ones are removed
        molecule.removed = false;
        molecule.state.transformFrom(addBodyTransform.bind(molecule));
        this._add(molecule.state).add(molecule);
    }
}
Again, I know it is not the best approach, and I will be keen to hear from someone with a better solution. :)

How do I find my mouse point in a scene using SceneKit?

I have set up a scene in SceneKit and have issued a hit-test to select an item. However, I want to be able to move that item along a plane in my scene. I continue to receive mouse drag events, but don't know how to transform those 2D coordinates into a 3D coordinate in the scene.
My case is very simple. The camera is located at 0, 0, 50 and pointed at 0, 0, 0. I just want to drag my object along the z-plane with a z-value of 0.
The hit-test works like a charm, but how do I translate the mouse point from a drag event into a new position in the scene for the 3D object I am dragging?
You don't need to use invisible geometry — Scene Kit can do all the coordinate conversions you need without having to hit test invisible objects. Basically you need to do the same thing you would in a 2D drawing app for moving an object: find the offset between the mouseDown: location and the object position, then for each mouseMoved:, add that offset to the new mouse location to set the object's new position.
Here's an approach you could use...
1. Hit-test the initial click location as you're already doing. This gets you an SCNHitTestResult object identifying the node you want to move, right?
2. Check the worldCoordinates property of that hit test result. If the node you want to move is a child of the scene's rootNode, this is the vector you want for finding the offset. (Otherwise you'll need to convert it to the coordinate system of the parent of the node you want to move — see convertPosition:toNode: or convertPosition:fromNode:.)
3. You're going to need a reference depth for this point so you can compare mouseMoved: locations to it. Use projectPoint: to convert the vector you got in step 2 (a point in the 3D scene) back to screen space — this gets you a 3D vector whose x- and y-coordinates are a screen-space point and whose z-coordinate tells you the depth of that point relative to the clipping planes (0.0 is on the near plane, 1.0 is on the far plane). Hold onto this z-coordinate for use during mouseMoved:.
4. Subtract the position of the node you want to move from the mouse location vector you got in step 2. This gets you the offset of the mouse click from the object's position. Hold onto this vector — you'll need it until dragging ends.
5. On mouseMoved:, construct a new 3D vector from the screen coordinates of the new mouse location and the depth value you got in step 3. Then, convert this vector into scene coordinates using unprojectPoint: — this is the mouse location in your scene's 3D space (equivalent to the one you got from the hit test, but without needing to "hit" scene geometry).
6. Add the offset you got in step 4 to the new location you got in step 5 - this is the new position to move the node to. (Note: for live dragging to look right, you should make sure this position change isn't animated. By default the duration of the current SCNTransaction is zero, so you don't need to worry about this unless you've changed it already.)
(This is sort of off the top of my head, so you should probably double-check the relevant docs and headers. And you might be able to simplify this a bit with some math.)
As an experiment I implemented Mr Bishop's helpful answer. The drag doesn't quite work (the object - a chess piece - jumps off screen) because of differences in the coordinate magnitudes between the mouse click and the 3-D world. I've inserted log outputs here and there among the code.
I asked on the Apple forums if anyone knew the secret sauce to homogenize the coordinates but didn't get a decisive answer. One thing, I had made some experimental changes to Mr Bishop's method and the forum members advised me to return to his technique.
Despite my code's failings, I thought someone might find it a useful starting point. I suspect there are only one or two small problems with the code.
Note that the log of the world transform matrix of the object (chess piece) is not part of the process but one Apple forum member advised me that the matrix often offers a useful 'sanity check' - which indeed it did.
- (NSPoint) viewPointForEvent: (NSEvent *) event_
{
    NSPoint windowPoint = [event_ locationInWindow];
    NSPoint viewPoint = [self.view convertPoint: windowPoint
                                       fromView: nil];
    return viewPoint;
}

- (SCNHitTestResult *) hitTestResultForEvent: (NSEvent *) event_
{
    NSPoint viewPoint = [self viewPointForEvent: event_];
    CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
    NSArray * points = [(SCNView *) self.view hitTest: cgPoint
                                              options: @{}];
    return points.firstObject;
}

- (void) mouseDown: (NSEvent *) theEvent
{
    SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
    SCNVector3 clickWorldCoordinates = result.worldCoordinates;
    // log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073

    SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
    // log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565

    // save the z coordinate for use in mouseDragged
    mouseDownClickOnObjectZCoordinate = screenCoordinates.z;

    selectedPiece = result.node; // save selected piece for use in mouseDragged
    SCNVector3 piecePosition = selectedPiece.position;
    // log output: piecePosition x -18.200000, y 6.483060, z 2.350000

    offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
    offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
    offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
    // log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}

- (void) mouseDragged: (NSEvent *) theEvent
{
    NSPoint viewClickPoint = [self viewPointForEvent: theEvent];
    SCNVector3 clickCoordinates;
    clickCoordinates.x = viewClickPoint.x;
    clickCoordinates.y = viewClickPoint.y;
    clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
    // log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565

    // log output: pieceWorldTransform:
    // m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
    // m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
    // m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
    // m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1

    SCNVector3 newPiecePosition;
    newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
    newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
    newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
    // log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639

    selectedPiece.position = newPiecePosition;
}
I used the code written by Steve and with little modification it worked for me.
On mouseDown I save clickWorldCoordinates on a property called startClickWorldCoordinates.
On mouseDragged I calculate the selectedPiece position in this way:
SCNVector3 worldClickCoordinate = [(SCNView *) self.view unprojectPoint: clickCoordinates];
newPiecePosition.x = selectedPiece.position.x + worldClickCoordinate.x - startClickWorldCoordinates.x;
newPiecePosition.y = selectedPiece.position.y + worldClickCoordinate.y - startClickWorldCoordinates.y;
newPiecePosition.z = selectedPiece.position.z + worldClickCoordinate.z - startClickWorldCoordinates.z;
selectedPiece.position = newPiecePosition;
startClickWorldCoordinates = worldClickCoordinate;

leaflet.js calculating radius from center to borders

I'm switching from using Google Maps to Leaflet.js. One thing I did in Google Maps and can't seem to find in Leaflet.js is calculating the radius from the center of the map (i.e. the search location) to the sides of the map, since as you zoom in and out the area people are looking at can change significantly.
The code below shows the few lines I had in order to do that with Google Maps. Can somebody point me in the right direction regarding Leaflet.js?
// viewport stores the recommended viewport for the returned result.
// (LatLngBounds)
viewportLatLngBounds = zip.get("location").geometry.viewport;
this.map.fitBounds(viewportLatLngBounds);
this.collection.latitude = viewportLatLngBounds.getCenter().lat();
this.collection.longitude = viewportLatLngBounds.getCenter().lng();
// calculate radius
// get distance..
// from (lat of NE corner), (lng of center)
// to (lat of center), (lng of center)
topCenterLatLng = new google.maps.LatLng(viewportLatLngBounds.getNorthEast().lat(), viewportLatLngBounds.getCenter().lng());
metersRadius = google.maps.geometry.spherical.computeDistanceBetween(viewportLatLngBounds.getCenter(), topCenterLatLng);
this.collection.radius = metersRadius / 1000;
this.collection.radiusUnits = "km";
for future reference:
getMapRadiusKM: function() {
    var mapBoundNorthEast = map.getBounds().getNorthEast();
    var mapDistance = mapBoundNorthEast.distanceTo(map.getCenter());
    return mapDistance / 1000;
},
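And a self-contained example of the same calculation (the map setup and tile URL are illustrative; it assumes Leaflet is loaded and a sized div with id 'map' exists):
// Illustrative setup; any L.Map instance will do.
var map = L.map('map').setView([51.505, -0.09], 13);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

// Distance from the map centre to the north-east corner of the visible bounds, in km.
var radiusKM = map.getBounds().getNorthEast().distanceTo(map.getCenter()) / 1000;
console.log(radiusKM);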