I'm switching from Google Maps to Leaflet.js. One thing I did in Google Maps and can't seem to find in Leaflet.js is calculating the radius from the center of the map (i.e. the search location) to the sides of the map. As users zoom in and out, the area they are looking at can change significantly.
The code below shows the few lines I had in order to do that with Google Maps. Can somebody point me in the right direction regarding Leaflet.js?
// viewport stores the recommended viewport for the returned result.
// (LatLngBounds)
viewportLatLngBounds = zip.get("location").geometry.viewport;
this.map.fitBounds(viewportLatLngBounds);
this.collection.latitude = viewportLatLngBounds.getCenter().lat();
this.collection.longitude = viewportLatLngBounds.getCenter().lng();
// calculate radius
// get distance..
// from (lat of NE corner), (lng of center)
// to (lat of center), (lng of center)
topCenterLatLng = new google.maps.LatLng(viewportLatLngBounds.getNorthEast().lat(), viewportLatLngBounds.getCenter().lng());
metersRadius = google.maps.geometry.spherical.computeDistanceBetween(viewportLatLngBounds.getCenter(), topCenterLatLng);
this.collection.radius = metersRadius / 1000;
this.collection.radiusUnits = "km";
For future reference:
getMapRadiusKM: function() {
    var mapBoundNorthEast = map.getBounds().getNorthEast();
    var mapDistance = mapBoundNorthEast.distanceTo(map.getCenter());
    return mapDistance / 1000;
},
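If you need the value to stay current, one option (just a sketch; it assumes map is an initialised L.Map and that getMapRadiusKM is reachable as a plain function, or is called on whatever object holds it) is to recompute it whenever the view changes:
// Recompute the visible radius whenever the user pans or zooms.
map.on('moveend', function () {
    console.log('visible radius (km):', getMapRadiusKM());
});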
I'm making a sniper shooter arcade-style game in GameMaker Studio 2, and I want the positions of targets outside the viewport to be pointed to by chevrons that move along the circumference of the scope as it moves. I am using trig techniques to determine the coordinates, but the chevron is jumping around and doesn't seem to be pointing to the target. I have the code broken into two parts: the code to determine the coordinates, in the Step event of the enemies class (the objects that will be pointed to), and a Draw event in the same class. Additionally, when I try to rotate the chevron so it also points to the enemy, it doesn't draw at all.
Here's the coordinate algorithm and the code to draw the chevrons, respectively:
//determine the angle the target makes with the player
delta_x = abs(ObjectPlayer.x - x); //x axis displacement
delta_y = abs(ObjectPlayer.y - y); //y axis displacement
angle = arctan2(delta_y,delta_x); //angle in radians
angle *= 180/pi; //convert to degrees
//Determine the direction based on the larger dimension and
largest_distance = max(x,y);
plusOrMinus = (largest_distance == x)?
sign(ObjectPlayer.x-x) : sign(ObjectPlayer.y-y);
//define the chevron coordinates
chevron_x = ObjectPlayer.x + plusOrMinus*(cos(angle) + 20);
chevron_y = ObjectPlayer.y + plusOrMinus*(sign(angle) + 20);
The drawing code
if(object_exists(ObjectEnemy)){
    draw_text(ObjectPlayer.x, ObjectPlayer.y-10, string(angle));
    draw_sprite(Spr_Chevron, -1, chevron_x, chevron_y);
    //sSpr_Chevron.image_angle = angle;
}
Your current code is slightly more complex than it needs to be for this. If you want to draw chevrons pointing towards all enemies, you might as well do that on the spot in the Draw event, and use degree-based functions since you'll need degrees for drawing anyway:
var px = ObjectPlayer.x;
var py = ObjectPlayer.y;
with (ObjectEnemy) {
    var angle = point_direction(px, py, x, y);
    var chevron_x = px + lengthdir_x(20, angle);
    var chevron_y = py + lengthdir_y(20, angle);
    draw_sprite_ext(Spr_Chevron, -1, chevron_x, chevron_y, 1, 1, angle, c_white, 1);
}
(Also see: an almost-decade-old blog post of mine about doing this while clamping to screen edges instead.)
Specific problems with your existing code are:
Using a single-axis plusOrMinus with two axes
Adding 20 to sine/cosine instead of multiplying them by it
Trying to apply an angle to sSpr_Chevron (?) instead of using draw_sprite_ext to draw a rotated sprite.
Calculating largest_distance based on executing instance's X/Y instead of delta X/Y.
I'm trying to solve a problem where I cannot find the Relative Offset of a Point inside a Box that exists inside a space that can be arbitrarily rotated and translated.
I know the WorldSpace Location of the Box (and its 4 Corners; the Coordinates on the Image are Relative) as well as its Rotation. These can be arbitrary (it's actually a 3D Trigger Volume within a game, but we are only concerned with it in a 2D plane from top down).
Looking at it Aligned to an Axis, the Red Point's Relative position would be
0.25, 0.25
If the Box is Rotated arbitrarily, I cannot seem to figure out how to ensure that, given we sample the same Point (its World Location will have changed), its Relative Position doesn't change even though the World Rotation of the Box has.
For reference, the Red Point represents an Object that exists in the scene that the Box is encompassing.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker)
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        float RelativeX = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.X
        );
        float RelativeY = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.Y
        );
        OutPosition.X = FMath::Abs(RelativeX);
        OutPosition.Y = FMath::Abs(RelativeY);
        bResult = true;
    }
    return bResult;
}
Currently, you can see with the above code that I'm only using the Top Left and Bottom Right corners of the Box to try and calculate the offset. I know this is not a sufficient solution, as doing this does not allow for Rotation (I'd need to use the other 2 corners as well), however I cannot for the life of me work out what I need to do to reach the solution.
FMath::GetMappedRangeValueClamped
This converts one range onto another; (20 - 50) becomes (0 - 1), for example.
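In plain terms, the idea is just a linear remap plus a clamp (a small JavaScript sketch of the same idea, not the actual UE4 API; the function name is illustrative):
// Remap value from [inMin, inMax] to [outMin, outMax], clamped.
// e.g. mapRangeClamped(20, 50, 0, 1, 35) === 0.5
function mapRangeClamped(inMin, inMax, outMin, outMax, value) {
    var t = (value - inMin) / (inMax - inMin);
    t = Math.max(0, Math.min(1, t));
    return outMin + t * (outMax - outMin);
}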
Any assistance/advice on how to approach this problem would be much appreciated.
Thanks.
UPDATE
@Voo's comment helped me realize that the solution was much simpler than anticipated.
By knowing the Location of 3 of the Corners of the Box, I'm able to find the points on the 2 lines these 3 Locations create, then simply mapping those points into a 0-1 range gives the appropriate value regardless of how the Box is Translated.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker && GetMapVolume())
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        const FVector TopLeftLocation = GetMapVolume()->GetCornerTopLeftLocation();
        const FVector TopRightLocation = GetMapVolume()->GetCornerTopRightLocation();
        const FVector BottomLeftLocation = GetMapVolume()->GetCornerBottomLeftLocation();
        FVector XPlane = FMath::ClosestPointOnLine(TopLeftLocation, TopRightLocation, MapMarkerLocation);
        FVector YPlane = FMath::ClosestPointOnLine(TopLeftLocation, BottomLeftLocation, MapMarkerLocation);
        // Convert the X axis into a 0-1 range.
        float RelativeX = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerTopRightLocation().X),
            FVector2D(0.f, 1.f),
            XPlane.X
        );
        // Convert the Y axis into a 0-1 range.
        float RelativeY = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomLeftLocation().Y),
            FVector2D(0.f, 1.f),
            YPlane.Y
        );
        OutPosition.X = RelativeX;
        OutPosition.Y = RelativeY;
        bResult = true;
    }
    return bResult;
}
The above code is the amended code from the original question with the correct solution.
Assume the origin is at (x0, y0), the other three corners are at (x_x_axis, y_x_axis), (x_y_axis, y_y_axis) and (x1, y1), and the object is at (x_obj, y_obj).
Do these operations to all five points:
(1) Translate all five points by (-x0, -y0), so that the origin moves to (0, 0) (after that, (x_x_axis, y_x_axis) moves to (x_x_axis - x0, y_x_axis - y0));
(2) Rotate all five points around (0, 0) by -arctan((y_x_axis - y0)/(x_x_axis - x0)), so that (x_x_axis - x0, y_x_axis - y0) moves onto the x axis;
(3) Assuming the new coordinates are (0, 0), (x_x_axis', 0), (0, y_y_axis'), (x_x_axis', y_y_axis') and (x_obj', y_obj'), the object's zero-to-one coordinate is (x_obj'/x_x_axis', y_obj'/y_y_axis').
Rotation formula: (x_new, y_new) = (x_old * cos(theta) - y_old * sin(theta), x_old * sin(theta) + y_old * cos(theta))
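A sketch of those three steps in code (plain JavaScript with {x, y} points; all names are illustrative only, not engine API):
// Relative 0-1 coordinates of obj inside the box defined by the origin corner,
// the corner along the x edge and the corner along the y edge.
function relativePosition(origin, xAxisCorner, yAxisCorner, obj) {
    // (1) translate so the origin corner sits at (0, 0)
    var ax = xAxisCorner.x - origin.x, ay = xAxisCorner.y - origin.y;
    var bx = yAxisCorner.x - origin.x, by = yAxisCorner.y - origin.y;
    var px = obj.x - origin.x,         py = obj.y - origin.y;
    // (2) rotate everything by -theta so the x edge lies along the x axis
    var theta = Math.atan2(ay, ax);
    var cos = Math.cos(-theta), sin = Math.sin(-theta);
    var rotate = function (x, y) { return { x: x * cos - y * sin, y: x * sin + y * cos }; };
    var a = rotate(ax, ay), b = rotate(bx, by), p = rotate(px, py);
    // (3) divide by the edge lengths to get the zero-to-one coordinates
    return { u: p.x / a.x, v: p.y / b.y };
}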
Update:
Note:
If you use the distance method, you have to take care of the sign of the coordinate if the object might go out of the scene in the future;
If there will be other transformations on the scene in the future (like symmetry transformation if you have mirror magic in the game, or transvection transformation if you have shockwaves, heatwaves or gravitational waves in the game), then the distance method no longer applies and you still have to reverse all the transformations your scene has in order to get the object's coordinate.
There are many answers to this problem, but I'm not sure they all work with XTK; for example, I've seen multiple answers for this in Three.JS, but XTK and Three.JS obviously don't have the same API. Using a ray and a matrix seems very similar to many solutions for other frameworks, but I'm still not grasping a possible solution here. For now, just finding the X, Y, and Z coordinates and logging them with console.log is fine; later I was hoping to create a caption/tooltip to display the information, but there are other ways to display it too. Can someone at least tell me whether it is possible to use a ray to collide with the objects? I'm not sure how collision works in XTK with meshes or any other files. Any hints right now would be great!!
Here is my function to unproject in XTK. Please tell me if you see mistakes. Now, with the resulting point and the camera position, I should be able to find my intersections. To make the computation faster for the following step, I'll call it in a pick event, so I'll only have to try the intersections with a given object. If I have time I'll also try testing the bounding boxes.
Nota bene: the last lines are not required; I could work on the ray instead of the point.
X.camera3D.prototype.unproject = function (x, y) {
    // get the 4x4 model-view matrix
    var mvMatrix = this._view;
    // create the 4x4 projection matrix from the flattened GL version
    var pMatrix = new X.matrix(4, 4);
    for (var i = 0; i < 16; i++) {
        pMatrix.setValueAt(i - 4 * Math.floor(i / 4), Math.floor(i / 4), this._perspective[i]);
    }
    // compute the product and invert it
    var mvpMatrix = pMatrix.multiply(mvMatrix); /** Edit: wrong product corrected **/
    var inverse_mvpMatrix = mvpMatrix.getInverse();
    if (!goog.isDefAndNotNull(inverse_mvpMatrix)) throw new Error("Could not invert the transformation matrix.");
    // check that x & y are mapped into the [-1,1] interval (required for the computations)
    if (x < -1 || x > 1 || y < -1 || y > 1) throw new Error("Invalid x or y coordinate, it must be between -1 and 1");
    // fill the 4x1 normalized (in [-1,1]⁴) homogeneous vector for the picked point on the screen
    var point4f = new X.matrix(4, 1);
    point4f.setValueAt(0, 0, x);
    point4f.setValueAt(1, 0, y);
    point4f.setValueAt(2, 0, -1.0); // 2*z - 1, with z = 0 for the near plane and z = 1 for the far plane
    point4f.setValueAt(3, 0, 1.0); // homogeneous coordinate arbitrarily set to 1
    // compute the picked ray in the world's basis in homogeneous coordinates
    var ray4f = inverse_mvpMatrix.multiply(point4f);
    if (ray4f.getValueAt(3, 0) == 0) throw new Error("Ray is not valid.");
    // convert back to non-homogeneous coordinates to compute the 3D direction vector
    var point3f = new X.matrix(3, 1);
    point3f.setValueAt(0, 0, ray4f.getValueAt(0, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(1, 0, ray4f.getValueAt(1, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(2, 0, ray4f.getValueAt(2, 0) / ray4f.getValueAt(3, 0));
    return point3f;
};
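For the intersection step itself, the idea is to build a ray from the camera position through the unprojected point and test it against each candidate object's bounding sphere. A minimal sketch with plain arrays (not XTK types; cameraPos, unprojected and center are [x, y, z], and all names are illustrative):
// Ray/bounding-sphere test for the follow-up picking step.
function raySphereIntersects(cameraPos, unprojected, center, radius) {
    // direction of the picking ray (from the camera towards the unprojected point)
    var d = [unprojected[0] - cameraPos[0],
             unprojected[1] - cameraPos[1],
             unprojected[2] - cameraPos[2]];
    var len = Math.sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
    d = [d[0] / len, d[1] / len, d[2] / len];
    // vector from the ray origin to the sphere centre
    var oc = [center[0] - cameraPos[0],
              center[1] - cameraPos[1],
              center[2] - cameraPos[2]];
    // distance along the ray of the closest approach to the centre
    var t = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];
    if (t < 0) t = 0; // sphere is behind the camera: just test the ray origin
    var cx = cameraPos[0] + t * d[0] - center[0];
    var cy = cameraPos[1] + t * d[1] - center[1];
    var cz = cameraPos[2] + t * d[2] - center[2];
    return cx * cx + cy * cy + cz * cz <= radius * radius;
}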
Edit
Here, in my repo, you can find functions in camera3D.js and renderer3D.js for efficient 3D picking in xtk.
Right now this is not easily possible. I guess you could grab the view matrix of the camera to calculate the position. If you do so, it would be great to bring it back into XTK as built-in functionality!
Currently, only object picking is possible like this (r is a X.renderer3D):
/**
* Picks an object at a position defined by display coordinates. If
* X.renderer3D.config['PICKING_ENABLED'] is FALSE, this function always returns
* -1.
*
* @param {!number} x The X-value of the display coordinates.
* @param {!number} y The Y-value of the display coordinates.
* @return {number} The ID of the found X.object or -1 if no X.object was found.
*/
var pick = r.pick($X, $Y);
I am hoping you might be able to help me determine if the following animation is possible in Raphael.js. I am trying to have an element fly off the page; the idea is to have it appear to fall/fly off in 3D. I am able to tell the element to rotate X degrees and slide off, but it's lacking the look of the element being independent of the background. What I would like to do is tell Raphael to rotate the top corner "out" as it falls, giving the illusion of it falling out of view like a picture falling off a wall. Is this even possible, or does Raphael only operate in two-dimensional space?
Raphael only deals with 2D space. To implement a 3D flip you have to fake it. Thankfully Raphael implements Scale(sx,sy,x,y) as a transform op, so you can scale about an origin to fake a 3D 'flip' rotation.
For example:
Raphael.el.flipXTransform = function (parentBbox) {
    var x = this.getBBox().x;
    var width = this.getBBox().width;
    parentBbox = parentBbox || { width: width, x: x };
    var parentWidth = parentBbox.width;
    var parentX = parentBbox.x;
    var originX = parentX - x + parentWidth / 2;
    return 's-1,1,' + originX + ',0';
};
Raphael.el.flipX = function (duration, easing, parentBbox) {
    duration = duration || 500;
    easing = easing || 'easeInOut';
    var scale = this.flipXTransform(parentBbox);
    this.animate({ transform: '...' + scale }, duration, easing);
};
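Usage might look something like this (a sketch; the paper size, rectangle and timings are arbitrary):
// Create a paper, draw a rectangle and 'flip' it about its centre.
var paper = Raphael(0, 0, 320, 240);
var rect = paper.rect(60, 60, 120, 80).attr({ fill: '#d33' });
rect.flipX(800, 'easeInOut');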
Here's a fiddle example for you to play with. The downside is this doesn't convey a perspective like a true 3D rotate does.
How is a 3D model handled unit-wise?
When I have a random model that I want to fit in my viewport, I don't know if it is too big or not, or whether I need to translate it to be in the middle...
I think a 3D object might have its own origin.
You need to find a bounding volume for your object: a shape that encloses all the object's vertices and is easier to work with than the object itself. Spheres are often used for this. Either the artist can define the sphere as part of the model information, or you can work it out at run time. Calculating the optimal sphere is very hard, but you can get a good approximation using the following:
determine the min and max value of each point's x, y and z:
for each vertex
    min_x = min (min_x, vertex.x)
    max_x = max (max_x, vertex.x)
    min_y = min (min_y, vertex.y)
    max_y = max (max_y, vertex.y)
    min_z = min (min_z, vertex.z)
    max_z = max (max_z, vertex.z)
sphere centre = ((max_x + min_x) / 2, (max_y + min_y) / 2, (max_z + min_z) / 2)
sphere radius = distance from centre to (max_x, max_y, max_z)
Using this sphere, determine a world position that allows the sphere to be viewed in full; simple geometry will determine this (see the sketch below).
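As a rough sketch of that geometry (plain JavaScript; it assumes a vertical field of view in radians, and the names are illustrative):
// Distance from the sphere centre at which the whole sphere fits in the view.
function cameraDistanceForSphere(radius, fovY) {
    // Tangent lines from the camera touch the sphere, so radius / distance = sin(fovY / 2).
    // Many engines use radius / tan(fovY / 2) as a slightly looser approximation.
    return radius / Math.sin(fovY / 2);
}
// Example: a sphere of radius 5 with a 60 degree vertical FOV
// needs the camera about 10 units from the sphere centre.
var d = cameraDistanceForSphere(5, Math.PI / 3);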
Sorry, your question is very unclear. I suppose you want to center a 3D model in a viewport. You can achieve this by calculating the model's bounding box. To do this, traverse all polygons and get the minimum/maximum X/Y/Z coordinates. The bounding box given by the points (min_x, min_y, min_z) and (max_x, max_y, max_z) will contain the whole model. Now you can center the model by looking at the center of this box. With some further calculations (depending on your FOV) you can also get the left/right/upper/lower borders inside your viewport.
"so i tried to scale it down"
The best thing to do in this situation is not to transform your model at all! Leave it be. What you want to change is your camera.
First calculate the bounding box of your model somewhere in 3D space.
Next calculate the radius of it by taking the max( aabb.max.x-aabb.min.x, aabb.max.y-aabb.min.y, aabb.max.z-aabb.min.z ). It's crude but it gets the job done.
To center the object in the viewport, place the camera at the object position. If Y is your forward axis, subtract the radius from Y. If Z is the forward axis, subtract the radius from it instead. Subtract a fudge factor to get you past the pesky near plane so your model doesn't clip out. I use quaternions in my engine with a nice lookat() method, so call lookat() and pass in the center of the bounding box. Voila! Your object is centered in the viewport regardless of where it is in the world.
This always places the camera axis-aligned, so you might want to get fancy and transform the camera into model space instead, subtract off the radius, then lookat() the center again. Then you're always looking at the back of the model. The key is always the lookat().
Here's some example code from my engine. It checks whether we're trying to frame a chunk of static terrain (if so, look down from a height), a light, or a static mesh. A visual is anything that draws in the scene, and there are dozens of different types. A Visual::Instance is a copy of the visual, or where to draw it.
void EnvironmentView::frameSelected(){
  if( m_tSelection.toInstance() ){
    Visual::Instance& I = m_tSelection.toInstance().cast();
    Visual* pVisual = I.toVisual();
    if( pVisual->isa( StaticTerrain::classid )){
      toEditorCamera().toL2W().setPosition( pt3( 0, 0, 50000 ));
      toEditorCamera().lookat( pt3( 0 ));
    }else if( I.toFlags()->bIsLight ){
      Visual::LightInstance& L = static_cast<Visual::LightInstance&>( I );
      qst3& L2W = L.toL2W();
      const sphere s( L2W.toPosition(), L2W.toScale() );
      const f32 y =-(s.toCenter()+s.toRadius()).y();
      const f32 z = (s.toCenter()+s.toRadius()).y();
      qst3& camL2W = toEditorCamera().toL2W();
      camL2W.setPosition(s.toCenter()+pt3( 0, y, z ));//45 deg above
      toEditorCamera().lookat( s.toCenter() );
    }else{
      Mesh::handle hMesh = pVisual->getMesh();
      if( hMesh ){
        qst3& L2W = m_tSelection.toInstance()->toL2W();
        vec4x4 M;
        L2W.getMatrix( M );
        aabb3 b0 = hMesh->toBounds();
        b0.min = M * b0.min;
        b0.max = M * b0.max;
        aabb3 b1;
        b1 += b0.min;
        b1 += b0.max;
        const sphere s( b1.toSphere() );
        const f32 y =-(s.toCenter()+s.toRadius()*2.5f).y();
        const f32 z = (s.toCenter()+s.toRadius()*2.5f).y();
        qst3& camL2W = toEditorCamera().toL2W();
        camL2W.setPosition( L2W.toPosition()+pt3( 0, y, z ));//45 deg above
        toEditorCamera().lookat( b1.toOrigin() );
      }
    }
  }
}