Disjoint Movement of Joints of Aldebaran Nao - C++

I am working on a locomotion system for the Aldebaran Nao. I noticed that my robot's motions are very disjoint as compared to those on other robots - a problem that I am pretty sure is code related.
I am updating my robot's motions using code similar to Aldebaran's fast get/set DCM example
(http://doc.aldebaran.com/1-14/dev/cpp/examples/sensors/fastgetsetdcm/fastgetsetexample.html).
I am updating the joint angles every 10 ms (the fastest possible update rate). However, it is clear that the motors move to the newly commanded angle very quickly and are motionless for the majority of the 10 ms. Is there any way to control the velocity of the motors during this 10ms update period?

There are many ways to send orders to the joints, using either the DCM or ALMotion.
The easiest way is ALMotion, with which you can use:
angleInterpolationWithSpeed, where you specify a speed as a fraction of the joint's maximum speed.
angleInterpolation, where you specify a time; in this example, 0.01 sec.
Using the DCM, you just have to ask for the movement to finish 10 ms later, and the DCM will interpolate the command in between.
The ALMotion and DCM documentation are well written; you should have a look.
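As a rough illustration, here is a minimal C++ sketch of those two ALMotion calls using the NAOqi SDK; the IP address, port, joint name, and angle/time values are placeholders to adapt to your setup:

#include <alproxies/almotionproxy.h>

int main() {
    // Connect to the robot's broker (address and port are placeholders).
    AL::ALMotionProxy motion("127.0.0.1", 9559);
    motion.setStiffnesses("Body", 1.0f);

    // Option 1: move HeadYaw to 0.5 rad at 20% of the joint's maximum speed.
    motion.angleInterpolationWithSpeed("HeadYaw", 0.5f, 0.2f);

    // Option 2: move HeadYaw back to 0.0 rad over exactly 1.0 s; NAOqi
    // interpolates the trajectory, so the motor moves smoothly for the
    // whole duration instead of jumping and then sitting idle.
    motion.angleInterpolation("HeadYaw", 0.0f, 1.0f, true);
    return 0;
}

With the DCM you would get the same smoothing by sending the target angle with a timestamp 10 ms in the future, rather than commanding it for "now".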

Related

How to do transformation to get correct linear velocity from linear acceleration IMU data?

I have an IMU sensor that gives me raw data such as orientation and angular and linear acceleration. I'm using ROS and doing some Gazebo UUV simulation. Furthermore, I want to get linear velocity from the raw IMU data. If I integrate over time, there will be accumulated error, and the result will not stay accurate, for example when the robot makes turns.
So if I use
acceleration_x = (msg->linear_acceleration.x + 9.81 * sin(pitch)) * cos(pitch);
acceleration_y = (msg->linear_acceleration.y - 9.81 * sin(roll)) * cos(roll);
and then integrate the linear acceleration,
Velocity_x = Velocity_old_x + acceleration_x * dt;
the result is very bad, because it integrates the acceleration without taking into account any possible rotation of the sensor, which means the results will probably be terrible if the sensor rotates at all.
So I need some ROS package that takes all of these transformations into account and gives me the most accurate estimate of the linear velocity. Any help? Thanks
I would first recommend that you try fitting your input sensor data into an EKF or UKF node from the robot_localization package. This package is the most used & most optimized pose estimation package in the ROS ecosystem.
It is designed to handle 3D sensor input, but you have to configure the parameters (there are no real defaults; it is all configuration). Besides the configuration docs above, the GitHub repo has good examples of YAML parameter configurations (Ex.) (you'll want a separate file from the launch file) and example launch files (Ex.).
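To give a sense of what that configuration looks like, here is a minimal, untested sketch of an ekf_localization_node YAML file fusing a single IMU; the topic name /imu/data and the frame names are assumptions to adapt to your setup:

frequency: 30
sensor_timeout: 0.1
two_d_mode: false

odom_frame: odom
base_link_frame: base_link
world_frame: odom            # publish the odom->base_link transform

imu0: /imu/data              # hypothetical topic name
# Per-variable fusion flags, in the order:
# x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
imu0_remove_gravitational_acceleration: true

The imu0_remove_gravitational_acceleration flag is what handles the gravity-compensation math you were doing by hand with sin(pitch) and sin(roll).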
If you're talking about minimizing accumulated error, feeding IMU or odometry-velocity data into an EKF/UKF will give you the odom->base_link frame transform, and that is the best you can do, by definition. Absolute pose error will creep in and accumulate unless you have a measurement in an absolute reference frame (e.g. GPS, or a position estimate processed from camera/lidar data). Specific to velocity, which you asked for, the same applies one derivative down: unless you have an absolute reference frame estimate of your velocity or pose, you will accumulate error just by integrating your acceleration, and that is the best you can do, by definition.
If it's an underwater robot, you may be able to get a velocity / water flow speed sensor attached to your vehicle. Or you may be able to use camera / lidar / sonar with processing to get an absolute reference frame or at least a position difference between execution cycles. Otherwise your precision & results are limited to the sensors you have.

Linear Interpolation and Object Collision

I have a physics engine that uses AABB testing to detect object collisions and an animation system that does not use linear interpolation. Because of this, my collisions act erratically at times, especially at high speeds. Here is a glaringly obvious problem in my system...
For the sake of demonstration, assume a frame in our animation system lasts 1 second and we are given the following scenario at frame 0.
At frame 1, the collision of the objects will not be detected, because c1 will have traveled past c2 by the next draw.
Although I'm not using it, I have a bit of a grasp on how linear interpolation works because I have used linear extrapolation in this project in a different context. I'm wondering if linear interpolation will solve the problems I'm experiencing, or if I will need other methods as well.
There is a part of me that is confused about how linear interpolation is used in the context of animation. The idea is that we can achieve smooth animation at low frame rates. In the above scenario, we cannot simply just set c1 to be centered at x=3 in frame 1. In reality, they would have collided somewhere between frame 0 and frame 1. Does linear interpolation automatically take care of this and allow for precise AABB testing? If not, what will it solve and what other methods should I look into to achieve smooth and precise collision detection and animation?
The phenomenon you are experiencing is called tunnelling, and is a problem inherent to discrete collision detection architectures. You are correct in feeling that linear interpolation may have something to do with the solution as it can allow you to, within a margin of error (usually), predict the path of an object between frames, but this is just one piece of a much larger solution. The terminology I've seen associated with these types of solutions is "Continuous Collision Detection". The topic is large and gets quite complex, and there are books that discuss it, such as Real Time Collision Detection and other online resources.
So to answer your question: no, linear interpolation on its own won't solve your problems, unless you're only dealing with circles or spheres.
What to Start Thinking About
How the solutions look and behave is dependent on your design decisions, and they are generally large. So, just to point in the direction of a solution: the fundamental idea of continuous collision detection is to figure out how far between the earlier frame and the later frame the collision happens, and what position and rotation the two objects have at that point. Then you must calculate the configuration the objects will be in at the later frame time in response to this. Things get very interesting when addressing these problems for anything other than circles in two dimensions.
I haven't implemented this, but I've seen a solution described where you march the two candidates forward between the frames, advancing their positions with linear interpolation and their orientations with spherical linear interpolation, and checking with discrete algorithms whether they're intersecting (the Gilbert-Johnson-Keerthi algorithm). From there you continue applying discrete algorithms to get the smallest penetration depth (the Expanding Polytope algorithm) and pass that, along with the remaining time between the frames, to a solver to get how the objects look at your later frame time. This doesn't give an analytic answer, but I don't know of an analytic answer for the generalized 2D or 3D case.
If you don't want to go down this path, your best weapon in the fight against complexity is assumptions: if you can assume your high-velocity objects can be represented as points, things get easier; if you can assume the orientation of the objects doesn't matter (circles, spheres), things get easier; and so it keeps going. The topic is beyond interesting and I'm still on the path of learning it, but it has provided some of the most satisfying moments of my time programming. I hope these ideas get you on that path as well.
Edit: Since you specified you're working on a billiard game.
First we'll check whether discrete or continuous detection is needed:
Is any amount of tunnelling acceptable in this game? In billiards, no.
At what speed will we see tunnelling? Using a 0.0285 m radius for the ball (standard American) and a 0.01 s physics step, we get 2.85 m/s as the minimum speed at which collisions start giving a bad response. I'm not familiar with the speeds of billiard balls, but that number feels too low.
So just checking on every frame whether two of the balls are intersecting is not enough, but we don't need to go fully continuous either. If we use interpolation to subdivide each frame, we can increase the velocity needed to create incorrect behaviour: with 2 subdivisions we get 5.7 m/s, which is still low; 3 subdivisions gives us 8.55 m/s, which seems reasonable; and 4 gives us 11.4 m/s, which feels higher than billiard balls actually move. So how do we accomplish this?
Discrete Collisions with Frame Subdivisions using Linear Interpolation
Using subdivisions is expensive so it's worth putting time into candidate detection to use it only where needed. This is another problem with a bunch of fun solutions, and unfortunately out of scope of the question.
So you have two candidate circles which will very probably collide between the current frame and the next. In pseudocode, the algorithm looks like:
dt = 0.01
subdivisions = 4
circle1.next_position = circle1.position + (circle1.velocity * dt)
circle2.next_position = circle2.position + (circle2.velocity * dt)
for i from 0 to subdivisions:
    temp_c1.position = interpolate(circle1.position, circle1.next_position, (i + 1) / subdivisions)
    temp_c2.position = interpolate(circle2.position, circle2.next_position, (i + 1) / subdivisions)
    if intersecting(temp_c1, temp_c2):
        return intersection confirmed
return no intersection
Where the interpolate signature is interpolate(start, end, alpha)
So here you have interpolation being used to "move" the circles along the path they would take between the current frame and the next. On a confirmed intersection you can compute the penetration depth and pass the delta time (dt / subdivisions), the two circles, the penetration depth, and the collision points along to a resolution step that determines how they should respond to the collision.
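For concreteness, here is one possible C++ translation of that pseudocode; the Vec2 and Circle structs and all names are invented for the sketch:

struct Vec2 { double x, y; };
struct Circle { Vec2 position; Vec2 velocity; double radius; };

static Vec2 lerp(const Vec2& a, const Vec2& b, double t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

static bool intersecting(const Vec2& p1, double r1, const Vec2& p2, double r2) {
    double dx = p2.x - p1.x, dy = p2.y - p1.y;
    double rsum = r1 + r2;
    return dx * dx + dy * dy <= rsum * rsum; // compare squared distances, no sqrt
}

// True if c1 and c2 intersect at any of the subdivided positions between
// the current frame and the next frame (dt seconds later).
bool subdividedCollision(const Circle& c1, const Circle& c2, double dt, int subdivisions) {
    Vec2 next1 = { c1.position.x + c1.velocity.x * dt, c1.position.y + c1.velocity.y * dt };
    Vec2 next2 = { c2.position.x + c2.velocity.x * dt, c2.position.y + c2.velocity.y * dt };
    for (int i = 0; i < subdivisions; ++i) {
        double alpha = static_cast<double>(i + 1) / subdivisions;
        Vec2 p1 = lerp(c1.position, next1, alpha);
        Vec2 p2 = lerp(c2.position, next2, alpha);
        if (intersecting(p1, c1.radius, p2, c2.radius))
            return true;
    }
    return false;
}

Note that alpha starts at 1/subdivisions rather than 0, since the positions at alpha = 0 were already tested on the previous frame.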

How to process data at less than camera's frame per second ability?

I am not sure how to put my question properly, so here it goes.
I am running an object detection algorithm at 40 frames per second (fps) on a camera which acts as an 'eye' for a robot. I then process the information received from the algorithm and pass actions to my robot.
The issue is that each time the algorithm runs, it gives me a slightly different reading. I guess that's because it processes data 40 times per second, so it keeps producing new information. But I don't need new information if my robot doesn't move, as most of the objects are in the same position as in the previous frame.
My question: how can I enhance my algorithm so it only gives me information when there is a change in object positions, for example by comparing the last frame's reading with the current frame's?
I think you should try to find the motion estimation of the image; I think MPEG-4 video uses an algorithm like that.
http://www.img.lx.it.pt/~fp/cav/Additional_material/MPEG4_video.pdf
But if you don't want something so sophisticated and you just want to see whether the second image is the same as the first one, just subtract them and look at the difference. You can also use a Gaussian filter to cut the high frequencies before subtracting, then put a threshold on the result to decide whether to do the processing or not.
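As a rough sketch of that blur-subtract-threshold gate in OpenCV C++ (the threshold values are arbitrary placeholders, and the frames are assumed to be grayscale):

#include <opencv2/opencv.hpp>

// Returns true if the new frame differs enough from the previous one
// to be worth re-running the (expensive) object detector.
bool frameChanged(const cv::Mat& prevGray, const cv::Mat& currGray,
                  int pixelThreshold = 25, double changedFraction = 0.01) {
    cv::Mat blurredPrev, blurredCurr, diff, mask;
    // Gaussian blur suppresses high-frequency sensor noise before differencing.
    cv::GaussianBlur(prevGray, blurredPrev, cv::Size(5, 5), 0);
    cv::GaussianBlur(currGray, blurredCurr, cv::Size(5, 5), 0);
    cv::absdiff(blurredPrev, blurredCurr, diff);
    // Keep only pixels whose intensity changed by more than pixelThreshold.
    cv::threshold(diff, mask, pixelThreshold, 255, cv::THRESH_BINARY);
    double changed = static_cast<double>(cv::countNonZero(mask)) / mask.total();
    return changed > changedFraction;
}

You would keep the previous frame around, call this once per capture, and only run the detector (and update the stored frame) when it returns true.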

opencv - predicting collisions of balls

I want to do a project which will consist of detecting possible collisions of pool balls, using OpenCV, a webcam, and the C++ programming language. For now I just want to predict the collision of 2 balls on a mini-billiard table. I detect them by converting RGB to HSV and then thresholding; in the future I will probably use another method to detect an arbitrary number of balls, but that's not so important now.
So, for now I can detect two balls, and I know their positions and radii. Now I'm thinking about how to predict whether there will be a collision between them, if we assume that they move in straight lines. I think I should check their positions on every frame update (and I have to know the time between frames on my webcam), and from that I will be able to determine the speed, acceleration, and direction of each ball. If I know those parameters for both balls, I will be able to determine where they could collide, and then, using parametric equations, check whether they will be at the collision point at the same time.
I wonder if this is the right approach to the problem; maybe there is a simpler and more effective method?
Thanks for any kind of help.
Karol
This sounds like you are on track for a good project...
Calculating acceleration seems, from what I briefly read here, reasonably difficult, so as a preliminary step you could just assume a constant velocity. Take the difference between a ball's position in the last frame and the current frame as a vector, and add it to the current frame's position to find where the ball will be next frame. Doing this for both balls will allow you to check for a collision.
You can check for a collision by using Pythagoras to compare the distance between the balls' centers to the sum of their radii: if the sum of the radii is greater than the distance between the centers, you have a collision.
Obviously, calculating one frame ahead is not very useful, but if you assume a constant velocity or manage to calculate the acceleration, there is no reason why you can't calculate 30 or 100 frames into the future with this method.
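A minimal C++ sketch of that constant-velocity prediction plus the Pythagorean test might look like this (the Ball struct and all names are invented for illustration):

struct Ball { double x, y, vx, vy, radius; };

// Extrapolate a ball n frames ahead, assuming constant velocity.
// dt is the time between frames (e.g. 1.0 / 30 for a 30 fps webcam).
Ball predict(const Ball& b, int frames, double dt) {
    Ball out = b;
    out.x += b.vx * frames * dt;
    out.y += b.vy * frames * dt;
    return out;
}

// Pythagorean test: collision if the distance between the centers is
// no greater than the sum of the radii (compared squared, so no sqrt).
bool colliding(const Ball& a, const Ball& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double rsum = a.radius + b.radius;
    return dx * dx + dy * dy <= rsum * rsum;
}

The velocity itself can be estimated from the camera as (current position - previous position) / dt.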
I recently made a billiard ball simulation in JavaScript, which you can take a quick look at here if you want to see how this could work.

Non-real time simulation of overlapping repulsive balls

I want to make a non-real time simulation of overlapping repulsive balls, initially in 2D and later in 3D.
First of all, consider a simple closed domain. In practice, domains will be complex and irregular, but always closed.
The balls on the boundaries are fixed and may overlap. A fixed ball duplicates itself to produce a free ball of the same size whenever no other ball overlaps it. Both fixed and free balls repel each other, but fixed balls cannot move. Note that the duplicated ball must be displaced sufficiently to start the repulsion. In the elastic-collision case, two balls that collide change direction with some velocity, but in this case the balls can stop quickly once they stop overlapping. Free balls move until there is no motion left; that is, we solve the motion problem until convergence. Then each fixed ball produces a free ball again, and this process goes on until no fixed ball can duplicate because it is overlapped by other balls.
I think a GPU (CUDA) would be faster for this problem, but initially I am planning to write it on the CPU. However, before proceeding to coding I would like to know the feasibility of this work: considering a million balls, approximately how long would it take to simulate this or a similar kind of problem in non-real time? For a million balls, if the solution time is on the order of minutes, I will dive into the problem.
You might look into using Box2D for a prototype. Setting up your collision constraints to be 'soft' would give you about the kind of behavior you're showing in your diagrams.
As for simulating a million objects in real time, you're going to be working on a GPU.
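If you do prototype it yourself on the CPU first, the core step the question describes (push overlapping balls apart until nothing moves) can be sketched as a simple position-based relaxation; this is a naive O(n^2) illustration, not Box2D, and every name in it is invented:

#include <algorithm>
#include <cmath>
#include <vector>

struct Ball { double x, y, r; bool fixed; };

// One relaxation pass: push each overlapping pair apart along the line
// joining their centers; fixed balls absorb none of the displacement.
// Returns the largest overlap corrected, so the caller can loop until
// the value falls below a tolerance (i.e. the system has converged).
double relaxOnce(std::vector<Ball>& balls) {
    double maxOverlap = 0.0;
    for (std::size_t i = 0; i < balls.size(); ++i) {
        for (std::size_t j = i + 1; j < balls.size(); ++j) {
            Ball& a = balls[i];
            Ball& b = balls[j];
            if (a.fixed && b.fixed) continue;             // neither can move
            double dx = b.x - a.x, dy = b.y - a.y;
            double dist = std::sqrt(dx * dx + dy * dy);
            double overlap = a.r + b.r - dist;
            if (overlap <= 0.0 || dist == 0.0) continue;  // not overlapping
            double nx = dx / dist, ny = dy / dist;
            // Split the correction between the two balls; a fixed ball
            // passes its whole share to the free one.
            double wa = a.fixed ? 0.0 : (b.fixed ? 1.0 : 0.5);
            double wb = b.fixed ? 0.0 : (a.fixed ? 1.0 : 0.5);
            a.x -= nx * overlap * wa; a.y -= ny * overlap * wa;
            b.x += nx * overlap * wb; b.y += ny * overlap * wb;
            maxOverlap = std::max(maxOverlap, overlap);
        }
    }
    return maxOverlap;
}

At a million balls the all-pairs loop is far too slow; a real implementation would need a spatial grid or similar broad phase so each ball only tests its neighbours, which is also where the GPU parallelism comes in.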