Physics [UE4]: Implement a "maximum compression" for vehicle-suspension - c++

For a given vehicle, I implemented a suspension system on four wheels.
The system is based on Hooke's Law.
The problem: the vehicle chassis should never be able to touch the ground. When driving on the inside of a spherical container, the suspension gets compressed up to 100%, so the chassis touches the ground, which leads to unwanted collisions that throw the vehicle around.
While that may be realistic behaviour, our game aims for an arcade feel, so I am looking for a formula to implement a maximum compression, so that the vehicle chassis can never come closer to the ground than X percent of the suspension length at any given moment, without actually simulating physical contact between the two rigid bodies. In other words, I need to apply a fake force through the suspension.
My current approach:
If the vehicle chassis would actually reach the suspension base (sorry, I don't know the proper term; I mean the point of maximum compression), a force equal in magnitude and opposite in direction to the force pushing onto the suspension would be applied to the chassis, forcing it to stop moving downwards.
To do this, I take my vehicle's world velocity V.
To get the downward velocity, I take the dot product of the velocity and the BodyUpVector.
float DownForceMagnitude = FVector::DotProduct(VelocityAtSuspension, BodyUpVector); // signed velocity along the body's up axis
FVector DownForce = DownForceMagnitude * BodyUpVector;                               // velocity component along the up axis
FVector CounterForce = -DownForce * WeightOnSuspension;                              // scaled by the (approximated) weight on this suspension
Okay, this pseudo-code works reasonably well on even ground, e.g. when the vehicle lands on a plane after a jump. Driving up an increasing slope, however (like driving on the inside walls of a sphere), makes the suspension reach maximum compression anyway, so apparently my approach is not correct.
I am now wondering what the cause is. My weight calculation is only approximated as VehicleWeight / 4, since Unreal Engine 4 has no functionality to query the weight resting on a given location. I am no physics pro, so forgive me if this is easy to calculate. Could that be the issue?
I do not need a physically 100% plausible solution; I just need something that works and reliably stops the downward motion of the vehicle chassis.
Any help is appreciated.

I had this problem with a futuristic magnetic hovercraft.
I solved it by scaling the force with a logarithmic factor that depends on the suspension's extension level, like so:
y = ln(ln(x + e))
where:
x = suspension extension level in % (0 being fully compressed)
y = the factor you multiply the force by
e = Euler's number
Here is a graph to show what it looks like:
https://ggbm.at/gmGEsAzE
ln is a very slowly growing function, which is why it works so well for this.
You probably want to clamp the values (maybe between 0 and 100; I don't know exactly how your code behaves or how you want this "break" to behave).
Tailor the function to your needs; I just wanted to suggest using ln the way I did to solve this problem.
I added e to x to make the curve pass through (0, 0); if you want it to stop earlier, just subtract from x before applying ln.
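For illustration, here is a minimal sketch of how such a factor could be computed and applied in C++ (not tied to any particular engine; ExtensionPercent and the force you scale are assumptions standing in for your own suspension variables):

#include <algorithm>
#include <cmath>

// x is the extension level in percent (0 = fully compressed, 100 = fully extended).
// y = ln(ln(x + e)) passes through (0, 0) because ln(ln(e)) = ln(1) = 0.
float ComputeLnFactor(float ExtensionPercent)
{
    const float x = std::clamp(ExtensionPercent, 0.0f, 100.0f);
    const float e = 2.718281828f; // Euler's number
    return std::log(std::log(x + e));
}

// Usage: scale whatever force you apply by the factor, e.g.
// ScaledForce = ComputeLnFactor(ExtensionPercent) * SuspensionForce;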
Also note that, depending on when and how you calculate and update your suspension, this (and any other function applied to the force based on the suspension's extension level) may not work under some circumstances, or at all.


Linear Interpolation and Object Collision

I have a physics engine that uses AABB testing to detect object collisions and an animation system that does not use linear interpolation. Because of this, my collisions act erratically at times, especially at high speeds. Here is a glaringly obvious problem in my system...
For the sake of demonstration, assume a frame in our animation system lasts 1 second and we are given the following scenario at frame 0.
At frame 1, the collision of the objects will not be detected, because c1 will have traveled past c2 on the next draw.
Although I'm not using it, I have a bit of a grasp on how linear interpolation works because I have used linear extrapolation in this project in a different context. I'm wondering if linear interpolation will solve the problems I'm experiencing, or if I will need other methods as well.
There is a part of me that is confused about how linear interpolation is used in the context of animation. The idea is that we can achieve smooth animation at low frame rates. In the above scenario, we cannot simply just set c1 to be centered at x=3 in frame 1. In reality, they would have collided somewhere between frame 0 and frame 1. Does linear interpolation automatically take care of this and allow for precise AABB testing? If not, what will it solve and what other methods should I look into to achieve smooth and precise collision detection and animation?
The phenomenon you are experiencing is called tunnelling, and is a problem inherent to discrete collision detection architectures. You are correct in feeling that linear interpolation may have something to do with the solution as it can allow you to, within a margin of error (usually), predict the path of an object between frames, but this is just one piece of a much larger solution. The terminology I've seen associated with these types of solutions is "Continuous Collision Detection". The topic is large and gets quite complex, and there are books that discuss it, such as Real Time Collision Detection and other online resources.
So to answer your question: no, linear interpolation on its own won't solve your problems (unless you're only dealing with circles or spheres).
What to Start Thinking About
The way the solutions look and behave is dependent on your design decisions, and they are generally large. So just to point in the direction of the solution, the fundamental idea of continuous collision detection is to figure out: how far between the early frame and the later frame does the collision happen, and in what position and rotation are the two objects at this point. Then you must calculate the configuration the objects will be in at the later frame time in response to this. Things get very interesting addressing these problems for anything other than circles in two dimensions.
I haven't implemented this, but I've seen a solution described where you march the two candidates forward between the frames, advancing their position with linear interpolation and their orientation with spherical linear interpolation, and checking with discrete algorithms whether they're intersecting (the Gilbert-Johnson-Keerthi algorithm). From there you continue to apply discrete algorithms to get the smallest penetration depth (the Expanding Polytope Algorithm) and pass that, along with the remaining time between the frames, to a solver to get how the objects look at your later frame time. This doesn't give an analytic answer, but I don't know of an analytic answer for generalized 2D or 3D cases.
If you don't want to go down this path, your best weapon in the fight against complexity is assumptions: If you can assume your high velocity objects can be represented as a point things get easier, if you can assume the orientation of the objects doesn't matter (circles, spheres) things get easier, and it keeps going and going. The topic is beyond interesting and I'm still on the path of learning it, but it has provided some of the most satisfying moments in my programming period. I hope these ideas get you on that path as well.
Edit: Since you specified you're working on a billiard game.
First, we'll check whether discrete or continuous collision detection is needed:
Is any amount of tunnelling acceptable in this game? Not in billiards, no.
What is the speed at which we will see tunnelling? Using a 0.0285 m radius for the ball (standard American) and a 0.01 s physics step, we get 2.85 m/s as the minimum speed at which collisions start giving a bad response. I'm not familiar with the speed of billiard balls, but that number feels too low.
So just checking on every frame if two of the balls are intersecting is not enough, but we don't need to go completely continuous. If we use interpolation to subdivide each frame we can increase the velocity needed to create incorrect behaviour: With 2 subdivisions we get 5.7m/s, which is still low; 3 subdivisions gives us 8.55m/s, which seems reasonable; and 4 gives us 11.4m/s which feels higher than I imagine billiard balls are moving. So how do we accomplish this?
Discrete Collisions with Frame Subdivisions using Linear Interpolation
Using subdivisions is expensive so it's worth putting time into candidate detection to use it only where needed. This is another problem with a bunch of fun solutions, and unfortunately out of scope of the question.
So you have two candidate circles which will very probably collide between the current frame and the next frame. So in pseudo code the algorithm looks like:
dt = 0.01
subdivisions = 4
circle1.next_position = circle1.position + (circle1.velocity * dt)
circle2.next_position = circle2.position + (circle2.velocity * dt)
for i from 0 to subdivisions:
    temp_c1.position = interpolate(circle1.position, circle1.next_position, (i + 1) / subdivisions)
    temp_c2.position = interpolate(circle2.position, circle2.next_position, (i + 1) / subdivisions)
    if intersecting(temp_c1, temp_c2):
        return "intersection confirmed"
return "no intersection"
Where the interpolate signature is interpolate(start, end, alpha)
So here you have interpolation being used to "move" the circles along the path they would take between the current and the next frame. On a confirmed intersection you can get the penetration depth and pass the delta time (dt / subdivisions), the two circles, the penetration depth and the collision points along to a resolution step that determines how they should respond to the collision.
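For reference, a rough C++ translation of the pseudo code might look like the following (the Vec2/Circle types and function names are illustrative, not from any particular engine):

struct Vec2   { float x, y; };
struct Circle { Vec2 position; Vec2 velocity; float radius; };

static Vec2 interpolate(const Vec2& start, const Vec2& end, float alpha)
{
    return { start.x + (end.x - start.x) * alpha,
             start.y + (end.y - start.y) * alpha };
}

static bool intersecting(const Circle& a, const Circle& b)
{
    const float dx = b.position.x - a.position.x;
    const float dy = b.position.y - a.position.y;
    const float r  = a.radius + b.radius;
    return dx * dx + dy * dy <= r * r;     // circle vs. circle overlap test
}

// True if the circles intersect at any of the subdivided steps between
// the current frame and the next one (dt seconds later).
bool SubdividedIntersection(const Circle& circle1, const Circle& circle2,
                            float dt, int subdivisions)
{
    const Vec2 next1 = { circle1.position.x + circle1.velocity.x * dt,
                         circle1.position.y + circle1.velocity.y * dt };
    const Vec2 next2 = { circle2.position.x + circle2.velocity.x * dt,
                         circle2.position.y + circle2.velocity.y * dt };

    Circle temp_c1 = circle1;
    Circle temp_c2 = circle2;
    for (int i = 0; i < subdivisions; ++i)
    {
        const float alpha = static_cast<float>(i + 1) / static_cast<float>(subdivisions);
        temp_c1.position = interpolate(circle1.position, next1, alpha);
        temp_c2.position = interpolate(circle2.position, next2, alpha);
        if (intersecting(temp_c1, temp_c2))
            return true;                   // intersection confirmed at this sub-step
    }
    return false;                          // no intersection between the two frames
}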

Q-learning to learn minesweeping behavior

I am attempting to use Q-learning to learn minesweeping behavior on a discrete version of Mat Buckland's smart sweepers, the original available here http://www.ai-junkie.com/ann/evolved/nnt1.html, for an assignment. The assignment limits us to 50 iterations of 2000 moves on a grid that is effectively 40x40, with the mines resetting and the agent being spawned in a random location each iteration.
I've attempted Q-learning with penalties for moving, rewards for sweeping mines, and penalties for not hitting a mine. The sweeper agent seems unable to learn how to sweep mines effectively within the 50 iterations, because it learns that going to a specific cell is good, but once that mine is gone it is no longer rewarded there and is instead penalized for going to that cell via the movement cost.
I wanted to try providing rewards only when all the mines were cleared, in an attempt to make the environment static (there would only be the states "not all mines collected" and "all mines collected"), but I am struggling to implement this: since the agent has only 2000 moves per iteration and is able to backtrack, it never manages to sweep all the mines within the limit, with or without rewards for collecting individual mines.
Another idea I had was to have what is effectively a new Q matrix for each mine, so that once a mine is collected, the sweeper transitions to the matrix in which that mine is excluded from consideration.
Are there any better approaches that I can take with this, or perhaps more practical tweaks to my own approach that I can try?
A more explicit explanation of the rules:
The map edges wrap around, so moving off the right edge of the map will cause the bot to appear on the left edge etc.
The sweeper bot can move up, down, left or right from any map tile.
When the bot collides with a mine, the mine is considered swept and then removed.
The aim is for the bot to learn to sweep all mines on the map from any starting position.
Given that the sweeper can always see the nearest mine, this should be pretty easy. From your question I assume your only problem is finding a good reward function and representation for your agent state.
Defining a state
Absolute positions are rarely useful in a random environment, especially if the environment is effectively infinite like in your example (since the bot can drive over the borders and respawn on the other side). This means that the size of the environment isn't needed for the agent to operate (we do, however, need it to simulate the infinite space).
A reward function calculates its return value based on the current state of the agent compared to its previous state. But how do we define a state? Let's see what we actually need in order to operate the agent the way we want.
The position of the agent.
The position of the nearest mine.
That is all we need. Now, I said earlier that absolute positions are bad. This is because they make the Q table (you call it a Q matrix) static and very fragile to randomness. So let's try to completely eliminate absolute positions from the reward function and replace them with relative positions. Luckily, this is very simple in your case: instead of using absolute positions, we use the relative position between the nearest mine and the agent.
Now we don't deal with coordinates anymore, but with vectors. Let's calculate the vector between our points: v = pos_mine - pos_agent. This vector gives us two very important pieces of information:
the direction in which the nearest mine lies, and
the distance to the nearest mine.
And these are all we need to make our agent operational. Therefore, an agent state can be defined as
State: Direction x Distance
where distance is a floating-point value and direction is either a float describing the angle or a normalized vector.
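As a rough sketch (in C++, with illustrative names), the state could be computed like this, taking the wrap-around edges of the 40x40 grid into account:

#include <cmath>

constexpr int GRID_SIZE = 40;   // the effective 40x40 grid from the question

// Shortest signed offset from a to b on an axis that wraps around.
int WrappedDelta(int a, int b)
{
    int d = (b - a) % GRID_SIZE;
    if (d >  GRID_SIZE / 2) d -= GRID_SIZE;
    if (d < -GRID_SIZE / 2) d += GRID_SIZE;
    return d;
}

struct AgentState
{
    float directionAngle;   // angle towards the nearest mine, in radians
    float distance;         // distance to the nearest mine, in cells
};

// v = pos_mine - pos_agent, taking the wrap-around edges into account.
AgentState MakeState(int agentX, int agentY, int mineX, int mineY)
{
    const float dx = static_cast<float>(WrappedDelta(agentX, mineX));
    const float dy = static_cast<float>(WrappedDelta(agentY, mineY));
    return { std::atan2(dy, dx), std::sqrt(dx * dx + dy * dy) };
}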
Defining a reward function
Given our newly defined state, the only thing we care about in our reward function is the distance. Since all we want is to move the agent towards mines, the distance is all that matters. Here are a few guesses at how the reward function could work:
If the agent sweeps a mine (distance == 0), return a huge reward (ex. 100).
If the agent moves towards a mine (distance is shrinking), return a neutral (or small) reward (ex. 0).
If the agent moves away from a mine (distance is increasing), return a negative reward (ex. -1).
Theoretically, since we penalize moving away from a mine, we don't even need rule 1 here.
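A minimal sketch of those rules as code (the distances are assumed to be computed from the state above, and the reward values are just the examples given):

// previousDistance / currentDistance: distance to the nearest mine before
// and after the move; reward values are the examples from the list above.
float Reward(float previousDistance, float currentDistance)
{
    if (currentDistance == 0.0f)            return 100.0f; // swept a mine
    if (currentDistance < previousDistance) return 0.0f;   // moved towards a mine
    return -1.0f;                                          // moved away (or no progress)
}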
Conclusion
The only thing left is determining a good learning rate and discount so that your agent performs well after 50 iterations. But, given the simplicity of the environment, this shouldn't even matter that much. Experiment.

What's the math behind such animation trajectories?

What's the math behind something like this? C++ perspective.
More examples on this MSDN page here.
UPDATE: I was asked for a more concrete question. What's the math/animation theory behind Penner's tweens? How do you come up with those formulas? What math principles are they based on?
Me and math, we are not BFFs! I'm working on a multi-FLOAT value animator for a UI thing I'm writing, and I was wondering what the math for generating such a trajectory looks like from a native C++ programmer's point of view.
I've Googled and found code, but I'm also looking for a bit of theory from a programming perspective... not just code or pure math. I can whip together the code I need from what I found online, but I'd like to understand it in the process - like this site that lets one experiment with an easing function generator.
I could also use the Windows Animation Manager (and I might if things get bloody), but that operates on a single float. And just animating RGB requires animating each FLOAT by itself. It leads to huge code-bloat... very bad.
If anyone has some hints, I would very much appreciate it. I'm looking mainly for theory from a programming perspective. The end goal is to write a bunch of different animation algorithms that can animate a set of FLOATs from their initial values to their target values in a period of time or speed and such.
The plan is not just to get the code written, but also to understand what goes on behind it. And then maybe get creative with these animations... unless they prove to be rigid, standard math functions.
So think of the requirements for a tweening function.
The function should represent a valid smooth motion between two positions/states. For those who haven't read the relevant section of the book, this means that f should be a continuous and differentiable function such that f(0) == 0 and f(1) == 1; actual motions are constructed using this as a primitive.
"Ease" (in the animation tweening sense) means "derivative is zero"; this gives the effect that the motion starts and/or ends with zero velocity (i.e. a standstill). So "ease-in" means f'(0) == 0 and "ease-out" means f'(1) == 0.
Everything else is based on aesthetic considerations.
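As a concrete example, the cubic f(t) = 3t^2 - 2t^3 (the classic "smoothstep") satisfies all four conditions: f(0) = 0, f(1) = 1, f'(0) = 0 and f'(1) = 0. A minimal C++ sketch (illustrative, not from any particular library):

#include <algorithm>

// f(t) = 3t^2 - 2t^3: f(0) = 0, f(1) = 1, f'(0) = 0, f'(1) = 0,
// i.e. an ease-in/ease-out primitive in the sense described above.
float EaseInOut(float t)
{
    t = std::clamp(t, 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// An actual motion built from the primitive: interpolate a value
// from start to end over the given duration.
float Tween(float start, float end, float elapsed, float duration)
{
    return start + (end - start) * EaseInOut(elapsed / duration);
}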
Cubic curves (e.g. Bezier/Hermite splines) are popular partly because they let you control both the position and the tangent (speed) at both ends of the curve, but also because they are close to the natural shape that a flexible beam adopts if you constrain its position at a few points. The cubic shape minimises the internal stress of a flexed beam. (Unsurprisingly, these wooden beams are known to boat designers and other drafters as "splines", which is where we get the word.)
Historically, hand-drawn cartoon animators have always specified their tweens by feel, based on experience. Key animators draw a chart (called a "timing chart"; look this up on your favourite image search engine) on the side of their key drawings, which tell the inbetweeners how the intermediate cels should be timed.
Camera motion (pan, zoom, rotate), however, was a different matter. Layout/animation artists specified the start and the end of the motion (using a field chart), the number of frames over which the motion would happen, instructions on easing, and anything else the layout/animation team felt was important (e.g. if you had to "linger").
The actual motions needed to be calculated; the audience would notice if one frame of a rotation was out even by a couple of hundredths of a degree. Doing these calculations was part of the job of the camera department.
There's a wonderful book called "Basic Animation Stand Techniques" by Brian Salt which dates from back in the days of physical animation cameras, and describes in some detail the sort of thing they had to do, and to what extent. I recommend it if you're at all interested in this stuff.
Math is math is math.
A good tutorial on Riemann sums will demonstrate the concept.
In fundamental programming, you have a math equation that generates a Y value (height) for a given X (time). Periodically, like once a second for example, you plug in a new X (time) value and get the height back.
The more often you evaluate this function, the better the resolution (this is where the diagrams of a Riemann sum and calculus come in). The best you will get is an approximation to the curve that looks like stair steps.
In embedded systems, there is not a lot of resources to evaluate a function like this very frequently. The curve can be approximated using line segments. The more line segments, the better the approximation (improves accuracy). So one method is to break up the curve into line segments. For a given x, use the appropriate linear formula for the line. Evaluation of a line usually takes less resources than evaluating a higher degree equation.
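For illustration, a small C++ sketch of that idea: sample the curve once into a table, then evaluate it by linear interpolation between the two surrounding samples (the table size and the [0, 1] input range are arbitrary choices):

#include <algorithm>
#include <array>

constexpr int kSegments = 16;
std::array<float, kSegments + 1> table;

// Sample y = curve(x) for x in [0, 1] into the table once, up front.
void BuildTable(float (*curve)(float))
{
    for (int i = 0; i <= kSegments; ++i)
        table[i] = curve(static_cast<float>(i) / kSegments);
}

// Piecewise linear approximation: cheap to evaluate every frame.
float EvaluateTable(float x)
{
    x = std::clamp(x, 0.0f, 1.0f) * kSegments;
    const int   i = std::min(static_cast<int>(x), kSegments - 1);
    const float t = x - static_cast<float>(i);
    return table[i] + (table[i + 1] - table[i]) * t;
}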
Your curves are usually generated from equations of physics, so not only do you need to brush up on math, you should also brush up on physics.
Otherwise you can search the web for libraries that handle trajectories.
As you get closer to the hardware, a timer can be used to call a method that evaluates the trajectory function for the given X. The timer helps produce a more accurate time value.
Search the web for "curve fit algorithm", "Bresenham algorithm", "graphics collision detection algorithms"

Simplest way to simulate basic diffusion over a 3D matrix?

I'm currently writing a program that will simulate, in very basic terms, the diffusion and pressure of a gas in a 3D volume with boundaries throughout it - think, for example, of an ant cave.
The effects I want to achieve:
1. Gas diffuses throughout the environment over time, respecting walls.
2. I'd like to measure pressure, or the compression of the gas, per grid point. The effect of this should be that if a room is opened, the gas will diffuse out of the opening at a speed that reflects the pressure difference.
My problem is that I lack the knowledge to fully understand theoretical math equations, and to be honest I'm really not looking for an accurate simulation. I'd just want to achieve some of the prominent effects of the physics at play. I'm not interested in fluid dynamics (For example the simulation of smoke).
I'll be writing this program in OpenCL but happy to take any form of code examples, be it C or pseudo code.
I'm thinking I should pass in three 3D arrays - one for the gas, one that defines the walls (e.g. 1 at (x, y, z) = wall), and one to store the pressure.
I'm currently assuming that checking for a wall is easy enough. One simply checks each neighbouring cell and accounts for it only if it's not a wall:
For each grid point:
    is wallmatrix[x+1] a wall?
        [diffusion math here]
    is wallmatrix[x-1] a wall?
        [diffusion math here]
    is wallmatrix[y+1] a wall?
        [diffusion math here]
    etc...
But I'm just not sure what to do with the diffusion math, nor how I would include pressure in all this?
Diffusion is one of the easiest things to simulate because it's self smoothing.
For example, you could run your simulation in constant time steps, keeping track of the individual particle positions, and at each time step move each particle a fixed (small) distance in a random direction.
There are other ways too; for example, you can use a grid-based approach, where you change the number of particles in each grid location.
A slight issue with your question is where you say, "diffuse out of the opening in a speed that reflects the pressure difference". Diffusion doesn't really do this, since diffusion is just the random motion of particles. I think, though, that even straight diffusion might look satisfying to you here, since the gas will diffuse out of an opening, and it will look faster. Really what will be happening though is that it will be diffusing out at the same speed as everywhere else, it's just that nothing will be diffusing back in. Still, if this isn't satisfying, then you will need to get into fluid dynamics, at least a bit, since this is how one describes how fluid behaves when there's a pressure gradient, not diffusion.
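To give a flavour of the random-walk approach described above, here is a small C++ sketch (the particle list, wall test and step length are assumptions about your setup, not a prescribed design):

#include <cmath>
#include <random>
#include <vector>

struct Particle { float x, y, z; };

// One time step: move every particle a small fixed distance in a uniformly
// random direction, rejecting moves that would end up inside a wall.
void DiffuseStep(std::vector<Particle>& particles,
                 bool (*isWall)(float x, float y, float z),
                 float stepLength, std::mt19937& rng)
{
    std::uniform_real_distribution<float> phiDist(0.0f, 6.2831853f);  // [0, 2*pi)
    std::uniform_real_distribution<float> cosThetaDist(-1.0f, 1.0f);

    for (Particle& p : particles)
    {
        const float phi = phiDist(rng);
        const float ct  = cosThetaDist(rng);
        const float st  = std::sqrt(1.0f - ct * ct);
        const float nx  = p.x + stepLength * st * std::cos(phi);
        const float ny  = p.y + stepLength * st * std::sin(phi);
        const float nz  = p.z + stepLength * ct;
        if (!isWall(nx, ny, nz))        // respect walls: reject moves into them
        {
            p.x = nx; p.y = ny; p.z = nz;
        }
    }
}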
Well, this is not an easy task!
First of all: do you want to simulate basic diffusion, or the complete motion of the gas? The second case isn't easy at all, but you can get an idea here.
If you just want to diffuse a gas in a static environment, things are easier, but you can't measure the total pressure; your only variable will be the local concentration of the gas.
This phenomenon is governed by Fick's laws; the second law is probably what you are looking for.
Read up on finite difference methods to understand how to discretize the diffusion equation.
The subject is quite big to write a complete answer here.
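That said, to give a flavour of what the discretized second law looks like, here is a minimal explicit finite-difference sketch in C++ (one time step on a uniform grid; the array names mirror the question's gas/wall matrices, the outermost layer of cells is assumed to be wall, and D * dt / h^2 must stay below 1/6 for stability):

#include <vector>

constexpr int NX = 64, NY = 64, NZ = 64;                  // illustrative grid size
inline int idx(int x, int y, int z) { return (z * NY + y) * NX + x; }

// One explicit time step of dC/dt = D * laplacian(C) with no-flux walls.
// gas, gasNext and wall each hold NX * NY * NZ values.
void DiffusionStep(const std::vector<float>& gas, std::vector<float>& gasNext,
                   const std::vector<char>& wall,          // 1 = wall, 0 = open
                   float D, float dt, float h)             // h = grid spacing
{
    const float k = D * dt / (h * h);                      // must stay <= 1/6 for stability
    const int off[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };

    for (int z = 1; z < NZ - 1; ++z)                       // outermost layer assumed to be wall
    for (int y = 1; y < NY - 1; ++y)
    for (int x = 1; x < NX - 1; ++x)
    {
        const int i = idx(x, y, z);
        if (wall[i]) { gasNext[i] = 0.0f; continue; }

        float flux = 0.0f;
        for (const int* o : off)
        {
            const int n = idx(x + o[0], y + o[1], z + o[2]);
            if (!wall[n])                                  // no exchange through walls
                flux += gas[n] - gas[i];
        }
        gasNext[i] = gas[i] + k * flux;
    }
}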

How to check collisions on x-y axis

I'm writing a mobile robotics application in C/C++ in Ubuntu and, at the moment, I'm using a laser sensor to scan the environment and detect collisions with objects when the robot moves.
This laser has a scan area of 270° and a maximum radius of 4000mm.
It is able to detect an object within this range and to report their distance from the sensor.
Each measurement is in polar coordinates, so to get more readable data I convert it to Cartesian coordinates, print the points to a text file, and plot them in MATLAB to see what the laser has detected.
This picture shows a typical detection in Cartesian coordinates.
Values are in meters, so 0.75 is 75 centimeters and 2 is two meters. Contiguous blue points are the detected objects, while the points near (0, 0) correspond to the laser position and must be discarded. Blue points with y < 0 appear because the laser scan area is 270°; I added the red square (1.5 x 2 meters) to mark the region within which I want to implement the collision check.
So, I would like to detect in real time whether there are points (objects) inside that area and, if so, call some functions. This is a bit tricky, because the check should also look for contiguous points to determine whether the object is real (i.e. if it detects a point, it should search for the nearest points to decide whether they make up an object or whether it's an isolated point that may be a detection error).
This is the function I use to perform a single scan:
struct point pt[limit*URG_POINTS];
//..
for(i = 0; i < limit; i++){
    for(j = 0; j < URG_POINTS; j++){
        ang2 = kDeg2Rad*((j*240/(double)URG_POINTS)-120);
        offset = 0.03; //it depends on sensor module [m]
        dis = (double) dist[cnt] / 1000.0;
        //THRESHOLD of RANGE
        // if(dis > MAX_RANGE) dis = 0; //MAX RANGE = 4[m]
        // if(dis < MIN_RANGE) dis = 0;
        pt[cnt].x = dis * cos(ang2) * cos(ang1) + (offset*sin(ang1)); // <-- X POINTS
        pt[cnt].y = dis * sin(ang2);                                  // <-- Y POINTS
        // pt[cnt].z = dis * cos(ang2) * sin(ang1) - (offset*cos(ang1)); // <-- 3D mapping disabled at the moment
        cnt++;
    }
    ang1 += diff;
}
After each single scan, pt contains all the detected points in x-y coordinates.
I'd like to do something like this:
perform a single scan, then, at the end,
apply the collision check to each pt.x and pt.y;
if a point is found in the inner region, check for other nearby points; if there are any, stop the robot;
if not, or if no other nearby points are found, start another scan.
I'd like to know how to easily check for objects (composed of more than one point) inside the previously defined region.
Can you help me, please?
It seems very difficult for me :(
I don't think I can give a complete answer, but a few thoughts on where it might be possible to go.
What do you mean by real time? How long may any given algorithm take to run? And what processor does your program run on?
Filtering the points that are within your area of detection should be quite easy: just check whether abs(x) < 0.75 && y > 0 && y < 2. Furthermore, you should only consider points that are far enough away from the origin, i.e. x^2 + y^2 > d^2 for some small threshold d.
But that should be the trivial part.
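For illustration, that filter might look something like this in C++ (assuming the pt array from the question's scan loop, with double x/y members, and an arbitrary MIN_DIST threshold for dropping points around the origin):

#include <cmath>
#include <vector>

struct point { double x, y; };  // assumed to mirror the struct filled by the scan loop

const double MIN_DIST = 0.05;   // illustrative: drop points this close to the laser [m]

// Keep only the points inside the 1.5 m x 2 m region and away from the origin.
std::vector<point> FilterRegion(const point* pt, int count)
{
    std::vector<point> inside;
    for (int i = 0; i < count; ++i)
    {
        const double x = pt[i].x;
        const double y = pt[i].y;
        if (std::fabs(x) < 0.75 && y > 0.0 && y < 2.0 &&   // inside the red square
            x * x + y * y > MIN_DIST * MIN_DIST)           // far enough from (0,0)
            inside.push_back(pt[i]);
    }
    return inside;
}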
Detecting groups of points is where it gets more interesting. DBSCAN has proven to be a fairly good clustering algorithm for detecting 2-dimensional groups of points. The critical question here is whether DBSCAN is fast enough for real-time applications.
If not, you might have to think about optimizing the algorithm (you can reduce its complexity to O(n log n) using some clever indexing structures).
Furthermore, it might be worth thinking about how you can incorporate the knowledge you have from your last iteration (assuming a high scan frequency, the data points should not change too much).
It might be worth looking at other robotics projects - I imagine the problem of interpreting sensor data to build up information about the surroundings is a rather common one.
UPDATE
It is fairly difficult to give good advice without knowing where you get stuck applying DBSCAN to your problem. But let me try to give you a step-by-step guide to how an algorithm might work:
For each data point you receive, check whether it is in the region you want to observe (the conditions I gave above should work).
If the data point is within the region, save it to some sort of list.
After reading all data points, check whether the list is empty. If so, everything is fine. Otherwise, we have to check whether there are larger groups of data points that you have to navigate around.
Now comes the more difficult part. You run DBSCAN on those points and try to find groups of points. Which parameters will work for the algorithm I do not know - that has to be tried out. After that you should have some clusters of points. I'm not totally sure what you will do with the groups - one idea would be to find, for each group, the points with the minimum and maximum angle in polar coordinates. That way you could decide how far you have to turn your vehicle. Special care would have to be taken if two groups are so close together that it is not possible to navigate through the gap between them.
For the implementation of DBSCAN you could look here or just ask Google for help. It is a fairly common algorithm that has been coded thousands of times. For further speed optimizations it might be helpful to create your own implementation. However, if one of the implementations you find seems usable, I would try that first before going all the way and implementing it myself.
If you stumble on specific problems while implementing the algorithm I would suggest creating a new question, as it is far away from this one and you might get more people that are willing to help you.
I hope things are a bit clearer now. If not please give the exact point that you have doubts about.