I do not understand the SymPy Mechanics rolling disc example

In the example A rolling disc using Lagrange's Method of the SymPy Mechanics module, it creates a contact point C between the disc and the ground, and sets the velocity of C in the N reference frame, which represents the floor, to 0:
C.set_vel(N, 0)
But, as I see it, the contact point C moves as the disc rolls over the floor, so the velocity of C in N should be nonzero; in fact, it should be one of the outputs of the example. What am I missing?

Related

C++ dot product to get if one object is in front of another?

I'm trying to use Vector2::Dot and Math::Acos to determine whether one game object is in front of another. With what I have, I'm able to tell whether the first object's forward vector is at a perpendicular angle to the second actor's forward vector:
float dotResult = Vector2::Dot(GetForwardDir(), secondActor->GetForward());
float angle = Math::Acos(dotResult);
if(angle <= minAngle)
//do something
Where GetForwardDir() is:
return Vector2(Math::Cos(mRotation), -Math::Sin(mRotation));
However, this does not account for where the second actor is relative to the first actor. I don't think I'm understanding something here about dotResult and the Acos angle.
How can I use this to get whether the first actor is in front of the second?
You probably want a cross product for this. The dot product will give you a scaled cos of the angle between two vectors. A cross product will give you the sin, which is more useful here.
Basically, make two vectors:
one from ActorA's position along ActorA's forward vector
one from ActorA's position toward ActorB (ActorB.position - ActorA.position)
Or, just
ActorA.forward X normalize(ActorB.Position - ActorA.position);
Since you're doing this in 2D, the result will be a scalar. Its sign tells you which side of ActorA's forward axis ActorB is on (which side counts as positive depends on the handedness of your coordinate system); to test front versus behind, use the sign of the dot product of ActorA's forward vector with the direction to ActorB instead.
Technically, you're calculating an orthogonal vector, the Z component of which tells you which side the target is on, but the X and Y components of that vector are zero, because the inputs lie in the XY plane.
Purists will say that the cross product does not exist in 2D, and they're technically correct, but this is how it's done.
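Here is a minimal C++ sketch of both sign tests, assuming a simple Vec2 type (the names are illustrative, not the engine's actual API):
struct Vec2 { float x, y; };

float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
float Cross2D(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; } // z of the 3D cross product

// Positive dot: B lies in the half-plane in front of A.
bool IsInFront(Vec2 aPos, Vec2 aForward, Vec2 bPos)
{
    Vec2 toB = { bPos.x - aPos.x, bPos.y - aPos.y };
    return Dot(aForward, toB) > 0.0f;
}

// Positive cross: B lies on one side of A's forward axis
// (which side depends on your coordinate system's handedness).
bool IsToSide(Vec2 aPos, Vec2 aForward, Vec2 bPos)
{
    Vec2 toB = { bPos.x - aPos.x, bPos.y - aPos.y };
    return Cross2D(aForward, toB) > 0.0f;
}
If your y axis points down (as the -Sin in GetForwardDir() suggests), the sign convention for Cross2D flips.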

fortran beginner - writing variable to output file

I am starting to work with a CFD Fortran program, and want to update the variables that it writes to an output file.
I want to output several columns: the I and J coordinates (IL and JL), the water surface elevation (SURFEL), the bottom elevation of the coordinate (BELV), the depth of water (HP) and finally, and this is where I have the question, the maximum water surface elevation of the coordinate during the simulation (SURFELMAX). L refers to a specific I,J coordinate; LA is the last coordinate in the simulation.
So far I have:
DO L=2,LA
  SURFEL=BELV(L)+HP(L)
  IF (SURFEL.GT.SURFELMAX) THEN
    SURFELMAX=SURFEL
  ELSE IF (SURFELMAX.GT.SURFEL) THEN
    SURFELMAX=SURFELMAX
    WRITE(10,200) IL(L),JL(L),SURFEL,SURFELMAX
  ENDIF
ENDDO
Everything works OK other than SURFELMAX: the highest recorded surface elevation that occurred at any coordinate in the whole domain is written for every coordinate, i.e. the column is filled with a single value, the highest experienced in the whole domain during the simulation.
Would I need to first allocate an array for SURFELMAX, and have SURFEL checked against it each time to see if it has increased? If so, could somebody point me in the right direction for this?
If I understand the requirements correctly, then you want to calculate SURFELMAX before you start writing out. This could simply be:
SURFELMAX = MAXVAL(BELV(2:LA)+HP(2:LA))
WRITE(10,200) (IL(L), JL(L), BELV(L)+HP(L), SURFELMAX, L=2,LA)
(or even as a single line).
It appears I didn't understand correctly; I'll try again - keeping the above as a warning to others.
It seems that you do indeed want SURFELMAX(2:LA) where each element is the highest in a given cell to date.
do L=2, LA
  SURFELMAX(L) = MAX(SURFELMAX(L), BELV(L)+HP(L)) ! Store the historical maximum
  WRITE (10,200) IL(L), JL(L), BELV(L)+HP(L), SURFELMAX(L)
end do
where, initially, SURFELMAX has been set to a sufficiently small value. You could also explicitly calculate SURFEL if that is needed.
If this is time dependent, then you will have to define a 2-D array SURFELMAX of size (1:LA,1:T) (T = number of time steps, LA = number of active coordinates).
Then increment the time step (say the iterator is called I_T) outside of the loop through the domain.
Finally, assign the maximum value at each coordinate to SURFELMAX(L,I_T).

Getting the velocity vector from position vectors

I looked at a bunch of similar questions, and I cannot seem to find one that particularly answers my question. I am coding a simple 3d game, and I am trying to allow the player to pick up and move entities around my map. I essentially want to get a velocity vector that will "push" the physics object a distance from the player's eyes, wherever they are looking. Here's an example of this being done in another game (the player is holding a chair entity in front of his eyes).
To do this, I find out the player's eye angles, then get the forward vector from the angles, then calculate the velocity of the object. Here is my working code:
void Player::PickupOtherEntity(Entity& HoldingEntity)
{
    QAngle eyeAngles = this->GetPlayerEyeAngles();
    Vector3 vecPos = this->GetEyePosition();
    Vector3 vecDir = eyeAngles.Forward();
    Vector3 holdingEntPos = HoldingEntity.GetLocation();

    // Update the object by holding it a distance away.
    vecPos.x += vecDir.x * DISTANCE_TO_HOLD;
    vecPos.y += vecDir.y * DISTANCE_TO_HOLD;
    vecPos.z += vecDir.z * DISTANCE_TO_HOLD;

    Vector3 vecVel = vecPos - holdingEntPos;
    vecVel = vecVel.Scale(OBJECT_SPEED_TO_MOVE);

    // Set the entity's velocity so as to "push" it to be in front of
    // the player's eyes at a distance of DISTANCE_TO_HOLD away.
    HoldingEntity.SetVelocity(vecVel);
}
All that is great, but I want to convert my math so that I can apply an impulse. Instead of setting a completely new velocity to the object, I want to "add" some velocity to its existing velocity. So supposing I have its current velocity, what kind of math do I need to "add" velocity? This is essentially a game physics question. Thank you!
A very simple implementation could be like this:
velocity(t+delta) = velocity(t) + delta * acceleration(t)
acceleration(t) = force(t) / mass of the object
velocity, acceleration and force are vectors; t, delta and the mass are scalars.
This only works reasonably well for small and equally spaced deltas. What you are essentially trying to achieve with this is a simulation of bodies using classical mechanics.
An impulse is technically F∆t (force times duration) for a constant force F. Here we might want to work with a∆t instead, because the mass is irrelevant. If you want to animate an impulse you have to decide what the change in velocity should be and how long it needs to take. It gets complicated real fast.
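As a minimal sketch of "adding" velocity (the ApplyImpulse name and the plain Vec3 struct are illustrative, not an engine API), an impulse J simply adds J/mass to the current velocity:
struct Vec3 { float x, y, z; };

// Apply an impulse J = F*dt [N*s] to a body of mass m [kg]:
// the resulting change in velocity is dv = J / m.
void ApplyImpulse(Vec3& velocity, const Vec3& impulse, float mass)
{
    velocity.x += impulse.x / mass;
    velocity.y += impulse.y / mass;
    velocity.z += impulse.z / mass;
}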
To be honest, an impulse isn't the correct thing to do. Instead it would be preferable to set a constant pick_up_velocity (people don't tend to pick things up using an impulse), and update the position by velocity.y on each step until the object reaches the correct level:
while (entPos.y < holdingEntPos.y)
{
    entPos.y += pickupVel.y;
    // some sort of short delay
}
And as for floating in front of the player's eyes, set an EyeMovementEvent of some sort that also sends the correct change in position to any entity the player is holding.
And if I missed something and that's what you are already doing, remember that when humans apply an impulse, it is generally a really high acceleration for a really short time, much less than a frame. You wouldn't see it in-game anyway.
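A slightly fuller sketch of this constant-speed approach (Vec3 and all the names here are hypothetical, not from the engine in the question): move the held entity toward the hold point by speed*dt each frame, snapping when close to avoid overshoot.
#include <cmath>

struct Vec3 { float x, y, z; };

// Called once per frame: move entPos toward holdPos at a constant speed,
// snapping to the target when the remaining distance is under one step.
void MoveTowardHoldPoint(Vec3& entPos, const Vec3& holdPos, float speed, float dt)
{
    Vec3 d = { holdPos.x - entPos.x, holdPos.y - entPos.y, holdPos.z - entPos.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float step = speed * dt;
    if (len <= step) { entPos = holdPos; return; } // close enough: snap to target
    float s = step / len;
    entPos.x += d.x * s;
    entPos.y += d.y * s;
    entPos.z += d.z * s;
}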
basic Newtonian/d'Alembert physics dictates:
derivative(position) = velocity
derivative(velocity) = acceleration
and also backwards:
integral(acceleration) = velocity
integral(velocity) = position
so for your engine you can use:
rectangle summation instead of integration (a numerical solution of the integral). Define a time constant dt [seconds], which is the interval between updates (a timer, or 1/fps). The update code (which must be called periodically, every dt):
vx+=ax*dt;
vy+=ay*dt;
vz+=az*dt;
x+=vx*dt;
y+=vy*dt;
z+=vz*dt;
where:
a{x,y,z} [m/s^2] is the current acceleration (in your case the direction vector scaled to a = Force/mass)
v{x,y,z} [m/s] is the current velocity
x,y,z [m] is the current position
These values have to be initialized: a and v to zero, and x,y,z to the initial position.
All objects/players... have their own variables.
A full stop is done by v=0; a=0;
Driving of objects is done only by changing a.
In case of a collision, mirror the v vector about the collision normal,
and maybe multiply by some k<1.0 (0.95 for example) to account for energy loss on impact.
You can add gravity or any other force field by adding a g vector:
vx+=ax*dt+gx*dt;
vy+=ay*dt+gy*dt;
vz+=az*dt+gz*dt;
You can also add friction and anything else you need.
PS. The same goes for angles: just use angle/omega/epsilon/I (angular position/velocity/acceleration and moment of inertia) instead of x/v/a/m.
To be clear, by angles I mean rotation (pitch, yaw, roll) around the center of mass.
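Putting the fragments above together, here is a minimal, self-contained C++ sketch of this Euler update (the Body struct and all names are illustrative):
struct Body {
    float x, y, z;    // position [m]
    float vx, vy, vz; // velocity [m/s]
    float ax, ay, az; // acceleration [m/s^2], i.e. force/mass
};

const float gx = 0.0f, gy = -9.81f, gz = 0.0f; // gravity [m/s^2]

// Call periodically with dt = seconds elapsed since the last update.
void Update(Body& b, float dt)
{
    b.vx += (b.ax + gx) * dt;
    b.vy += (b.ay + gy) * dt;
    b.vz += (b.az + gz) * dt;
    b.x += b.vx * dt;
    b.y += b.vy * dt;
    b.z += b.vz * dt;
}
As noted above, this rectangle summation is only reasonably accurate for small, equally spaced dt.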

what is a shapefile's measure value?

I'm trying to write a GIS, and am using shapefiles from kortforsyningen.dk.
I have the problem that I can't find out what the m (measure) value of a vertex is.
I know the x value is east/west,
y is north/south,
z is the height (elevation),
but m, what's that? In physics it would be time or a 4th dimension, but neither of those fits the word "measure".
The documentation doesn't tell; the first time the word is used, it just says "plus a m (measure) value" (page 10).
EDIT:
when I wrote "The Documentation" I meant the shapefile documentation, this one:
http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
m seems to be any value that you can assign to a point. E.g. you measure the temperature at specific measurement points: then x,y contain the geo coordinates, and m the temperature. There is also the PointZ type, which contains x,y,z,m; I understand that as a 3D point with an assigned measure, e.g. temperature or air pressure, etc.
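As a rough C++ sketch of the per-point layout the ESRI whitepaper describes (illustrative only, not a parser; the actual file records also carry shape types, record headers and bounding boxes):
struct PointM {
    double x; // easting
    double y; // northing
    double m; // user-defined measure, e.g. temperature
};

struct PointZ {
    double x;
    double y;
    double z; // elevation
    double m; // measure
};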

How do you judge the (real world) distance of an object in a picture?

I am building a recognition program in C++ and to make it more robust, I need to be able to find the distance of an object in an image.
Say I have an image that was taken 22.3 inches away from an 8.5 x 11 picture. The system correctly identifies that picture in a box with the dimensions 319 pixels by 409 pixels.
What is an effective way of relating the actual height and width (AH and AW) and the pixel height and width (PH and PW) to the distance (D)?
I am assuming that when I actually go to use the equation, PH and PW will be inversely proportional to D, and AH and AW will be constants (as the recognized object will always be an object whose width and height the user can indicate).
I don't know if you changed your question at some point, but my first answer is quite complicated for what you want. You can probably do something simpler.
1) Long and complicated solution (more general problems)
First you need to know the size of the object.
You can look at computer vision algorithms. If you know the object (its dimensions and shape), your main problem is the problem of pose estimation (that is, finding the position of the object relative to the camera), and from this you can find the distance. You can look at [1] [2] (for example; you can find other articles on the topic if you are interested) or search for POSIT, SoftPOSIT. You can formulate the problem as an optimization problem: find the pose in order to minimize the "difference" between the real image and the expected image (the projection of the object given the estimated pose). This difference is usually the sum of the (squared) distances between each image point Ni and the projection P(Mi) of the corresponding object (3D) point Mi for the current parameters.
From this you can extract the distance.
For this you need to calibrate your camera (roughly, find the relation between the pixel position and the viewing angle).
Now you may not want to code all of this by yourself; you can use computer vision libs such as OpenCV, Gandalf [3] ...
Now you may want to do something simpler (and approximate). If you can find the image distance between two points at the same "depth" (Z) from the camera, you can relate the image distance d to the real distance D with: d = a*D/Z (where a is a parameter of the camera, related to the focal length and the number of pixels, that you can find using camera calibration).
2) Short solution (for your simple problem)
But here is the (simple, short) answer: if your picture is on a plane parallel to the "camera plane" (i.e. it is perfectly facing the camera) you can use:
PH = a AH / Z
PW = a AW / Z
where Z is the depth of the plane of the picture and a is an intrinsic parameter of the camera.
For reference, the pinhole camera model relates image coordinates m = (u,v) to world coordinates M = (X,Y,Z) with:
m ~ K M
[u]   [ au  as  u0 ] [X]
[v] ~ [  0  av  v0 ] [Y]
[1]   [  0   0   1 ] [Z]
u = au*X/Z + as*Y/Z + u0
v = av*Y/Z + v0
where "~" means "proportional to" and K is the matrix of intrinsic parameters of the camera. You need to do camera calibration to find the K parameters. Here I assumed au = av = a and as = 0.
You can recover the Z parameter from either of those equations (or take the average of both). Note that Z is not the distance from the object (which varies across the different points of the object) but the depth of the object (the distance between the camera plane and the object plane). But I guess that is what you want anyway.
[1] Linear N-Point Camera Pose Determination, Long Quan and Zhongdan Lan
[2] A Complete Linear 4-Point Algorithm for Camera Pose Determination, Lihong Zhi and Jianliang Tang
[3] http://gandalf-library.sourceforge.net/
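To make the short solution concrete, here is a small C++ sketch that first recovers the camera constant a from the known shot in the question (an 11 in tall sheet, 409 px tall, 22.3 in away), then uses it to estimate the depth of a new sighting; the 300 px reading is a made-up example value:
#include <cstdio>

int main()
{
    // Calibration shot from the question: an 11 in tall sheet,
    // 22.3 in away, appears 409 px tall.
    const double AH  = 11.0;  // actual height [in]
    const double Z0  = 22.3;  // known depth [in]
    const double PH0 = 409.0; // pixel height at that depth

    // From PH = a*AH/Z, the camera constant is a = PH*Z/AH (in pixels).
    const double a = PH0 * Z0 / AH; // ~829 px

    // Estimate the depth of a new sighting of the same sheet:
    const double PH = 300.0;       // measured pixel height (example value)
    const double Z  = a * AH / PH; // ~30.4 in
    std::printf("a = %.1f px, estimated depth Z = %.1f in\n", a, Z);
    return 0;
}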
If you know the size of the real-world object and the angle of view of the camera, then, assuming you know the horizontal angle of view alpha(*) and the horizontal resolution of the image xres, the distance dw to an object in the middle of the image that is xp pixels wide in the image and xw meters wide in the real world can be derived as follows (how is your trigonometry?):
# Distance in "pixel space" relates to dinstance in the real word
# (we take half of xres, xw and xp because we use the half angle of view):
(xp/2)/dp = (xw/2)/dw
dw = ((xw/2)/(xp/2))*dp = (xw/xp)*dp (1)
# we know xp and xw, we're looking for dw, so we need to calculate dp:
# we can do this because we know xres and alpha
# (remember, tangent = oposite/adjacent):
tan(alpha) = (xres/2)/dp
dp = (xres/2)/tan(alpha) (2)
# combine (1) and (2):
dw = ((xw/xp)*(xres/2))/tan(alpha)
# pretty print:
dw = (xw*xres)/(xp*2*tan(alpha))
(*) alpha = the angle between the camera axis and a line going through the leftmost just-visible point on the middle row of the image, i.e. half the horizontal angle of view.
Link to your variables:
dw = D, xw = AW, xp = PW
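The final formula as a tiny C++ helper (the function and parameter names are mine, not from the answer above):
#include <cmath>

// Distance to an object of known real-world width, from its apparent
// width in pixels. alpha is the HALF horizontal angle of view, in radians.
double DistanceToObject(double xw,    // real width (e.g. meters)
                        double xp,    // apparent width [pixels]
                        double xres,  // horizontal image resolution [pixels]
                        double alpha) // half angle of view [radians]
{
    return (xw * xres) / (xp * 2.0 * std::tan(alpha));
}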
This may not be a complete answer but may push you in the right direction. Ever seen how NASA does it in those pictures from space? The way they have those tiny crosses all over the images: that's how they get a fair idea about the depth and size of objects, as far as I know. The solution might be to have an object whose correct size and depth you know in the picture, and then calculate the others relative to that. Time for you to do some research. If that's the way NASA does it, then it should be worth checking out.
I have got to say this is one of the most interesting questions I have seen for a long time on Stack Overflow :D. I just noticed you have only two tags attached to this question. Adding something more in relation to images might help you get better answers.