How to set scan angle in CObservation2DRangeScan? - c++

I'm trying to use the MRPT SLAM algorithm. I would like to adapt the original "icp slam app" to use lidar scans from my simulation. If I understand correctly, I should use the CObservation2DRangeScan class to contain the lidar observations.
My problem is that I cannot find how to set the scan angle. I presume the scan has to be in polar coordinates; if setScanRange sets the range in meters, how do I set the angle?
I cannot find a proper member function within the class, so I am probably missing something.
A code sample so far:
mrpt::obs::CObservation2DRangeScan::Ptr observation(new mrpt::obs::CObservation2DRangeScan);
observation->resizeScan(i32NUM_POINTS);
for (int32_t i = 0; i < i32NUM_POINTS; ++i)
{
    observation->setScanRange(i, arrPoints[i].range);
    // here I must set the scan angle
    observation->setScanRangeValidity(i, true);
}
mrpt version: 2.2.1
Thank you in advance
Massimo

The angle is implicitly defined by the index of each range within the vector.
I just edited the class docs to better explain this.
Note that this code describes the exact relationship between indices and angles:
float Ang = -0.5f * aperture;               // angle of the first ray (aperture is the total angular span)
float dA  = aperture / (m_scan.size() - 1); // angular step between consecutive rays
if (!rightToLeft)
{
    Ang = -Ang;
    dA  = -dA;
}
return Ang + dA * idx;                      // angle of the ray at index idx
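So the angle is never set per ray: you set the observation's total aperture and sweep direction once, and each index maps to an angle through the formula above. A minimal sketch for your loop, assuming a 180° lidar (aperture, rightToLeft, and maxRange are public fields of CObservation2DRangeScan in MRPT 2.x; the values here are placeholders that must match your simulated sensor):
observation->aperture    = float(M_PI); // total angular span in radians (180 deg assumed here)
observation->rightToLeft = true;        // true if the scan sweeps counter-clockwise
observation->maxRange    = 30.0f;       // sensor's maximum range in meters (assumed)
With these set, setScanRange(i, r) implicitly places ray i at angle -aperture/2 + i * aperture/(N-1).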
Also: Note that the program GridmapNavSimul already allows you to draw a gridmap world, drive a robot and generate simulated datasets without coding a single line... ;-)

Related

FBXSDK, using Quaternions to set rotation keys?

I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
    // Create camera
    FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
    // Connect the camera node to the root node
    FbxNode* lRootNode = pScene->GetRootNode();
    lRootNode->ConnectSrcObject(lMyCameraNode);
    FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
    lMyCameraNode->SetNodeAttribute(lMyCamera);
    // Create an animation stack
    FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");
    // Create the base layer (this is mandatory)
    FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
    myAnimStack->AddMember(pAnimLayer);
    // Get the camera's curve node for local rotation.
    FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);
    // Create curve nodes
    FbxAnimCurve* myRotXCurve = NULL;
    FbxAnimCurve* myRotYCurve = NULL;
    FbxAnimCurve* myRotZCurve = NULL;
    FbxTime lTime;     // For the start and stop keys.
    int lKeyIndex = 0; // Index for the keys that define the curve.
    // Get the animation curves for local rotation of the camera.
    myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
    myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
    myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);
    // Add keys, one per frame.
    int frameNumber = startFrame;
    for (int i = 0; i < rec.size(); i++)
    {
        lTime.SetFrame(frameNumber); // frame number
        // rx
        lKeyIndex = myRotXCurve->KeyAdd(lTime);
        myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
        // ry
        lKeyIndex = myRotYCurve->KeyAdd(lTime);
        myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
        // rz
        lKeyIndex = myRotZCurve->KeyAdd(lTime);
        myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
        frameNumber += 1;
    }
    return true;
}
I would ideally like to pass in quaternion data here, instead of the Euler x, y, z values. Is this possible with the FBX SDK? Or do I need to convert my quaternion data first, and continue to pass in Eulers?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work. Although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion."), an Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK there are at least two relatively easy ways to convert a quat to what they call an Euler, or to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);               // build a rotation matrix from the quaternion
FbxVector4 fe = fa.GetR(); // extract the XYZ Euler rotation, in degrees
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq); // sets fe2 to the XYZ Euler equivalent of the quaternion
I've successfully gone from an XYZ rotation sequence → quaternion → Euler with both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "Euler in spherical coordinates".
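For illustration, a minimal round-trip check along those lines might look like this (the angles are arbitrary test values; FbxAMatrix works in degrees):
FbxAMatrix m;
m.SetR(FbxVector4(30.0, 45.0, 60.0)); // start from an XYZ Euler rotation, in degrees
FbxQuaternion q = m.GetQ();           // the same rotation as a quaternion

FbxAMatrix m2;
m2.SetQ(q);
FbxVector4 e = m2.GetR();             // recovers approximately (30, 45, 60)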
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.

How do I specify the control points in a De Casteljau algorithm, if I am using nested for loops to iterate?

I know using De Casteljau's algorithm is not the best way to draw a Bezier curve, but I need to implement it for an assignment. I am defining my algorithm based on the following recurrence (from Drexel):
b_i^(r)(t) = (1 - t) * b_i^(r-1)(t) + t * b_(i+1)^(r-1)(t)
where the b_i^(0) = P_i are the control points.
I am trying to define the function to do the algorithm, but am struggling with where/how to incorporate the control points. The control points are defined by the user; as they interact with the program, a left click adds a new control point. My function currently looks as follows:
2Dpoint deCast(float t)
{
    2Dpoint tempDC;             // Temporary value of the point passed back to the OpenGL draw function
    tempDC.x = 0; tempDC.y = 0; // Initialize temporary value
    int r, i;
    int n = C->B.size();        // C is a pointer to the object holding the control-point vector B
    for (r = 1; r < n; r++)
    {
        for (i = 0; i < n - r; i++)
        {
            // Calculation of deCast points goes here
        }
    }
    return tempDC;
}
Here 2Dpoint is just a structure defined by a header file, and C is a pointer to the location of the control points, which are stored in a vector of 2Dpoint called B (i.e., the i-th control point is accessed by C->B[i].x and C->B[i].y). t is provided to the function when it is called in my draw function, as shown below.
void draw()
{
    glColor3f(0.0f, 1.0f, 0.0f);
    glLineWidth(2.0f);
    glBegin(GL_LINE_STRIP);
    float DCiter = 0;
    while (DCiter <= 1.0)
    {
        2Dpoint DC = deCast(DCiter);
        glVertex2f(DC.x, DC.y);
        DCiter = DCiter + 0.01;
    }
    glEnd();
}
You're going to have to pass the deCasteljau function your points, too, because that's the whole reason de Casteljau's algorithm works (which you even show in your mathematical formula: it reads "the next point = the weighted sum of two previous points"). Since you need to do linear interpolation between points, you need to work with points =)
In pseudocode:
deCasteljau(t, points):
    // the following code is destructive, so remember to be sensible
    points = shallowCopy(points)
    // de Casteljau's algorithm in four lines:
    while points.length > 1:
        for i = 0 to points.length - 2:
            points[i] = (1-t) * points[i] + t * points[i+1]
        points.pop()
    // and we're done.
    return points[0]
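In your C++ setting, a direct translation might look like the sketch below. Point2D is a hypothetical stand-in for your 2Dpoint struct (a C++ identifier cannot actually start with a digit):
#include <vector>

struct Point2D { float x, y; };

// Evaluate the Bezier curve defined by pts at parameter t (0 <= t <= 1).
Point2D deCasteljau(float t, std::vector<Point2D> pts) // taken by value: the algorithm is destructive
{
    while (pts.size() > 1)
    {
        for (std::size_t i = 0; i + 1 < pts.size(); ++i)
        {
            pts[i].x = (1 - t) * pts[i].x + t * pts[i + 1].x;
            pts[i].y = (1 - t) * pts[i].y + t * pts[i + 1].y;
        }
        pts.pop_back(); // each pass yields one fewer point
    }
    return pts[0];
}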
The end: you start with your control points, then you do a linear interpolation pass so you get all the points "between successive coordinates" at distance ratio t (e.g. if t is 0.25, you're finding each linear interpolation at 1/4 of the distance from one point to the next).
That operation yields a new set of points, one fewer than before. Then you do it again, and again, and again, until you're left with one point. That's your on-curve point. This works for a Bezier curve of any order.
It also has pretty much nothing to do with "drawing" a curve, though. De Casteljau's algorithm is a way to efficiently compute a single point on a curve; it makes no claims about, nor has any true relation to, drawing entire curves. That's a completely different task, for which de Casteljau's algorithm is just one of many options for finding whatever on-curve points you need to make your draw algorithm work.

Moving an Imported model along the forward Vector - DirectX11 C++

I have spent ages trying to work out how to create the forward vector of any imported object. This is so that I can complete the player's controls and also add billboarding. It is for a university project where I have been given a pre-made GameObject class that loads the objects using MeshLoader and OBJLoader classes.
I am currently using a .GetWorld() call, which returns an XMFLOAT4X4 that should give me the object's data in the world. I then convert it to an XMMATRIX using XMLoadFloat4x4, and grab each row of data from that matrix into its own XMVECTOR. I have tested these and believe they are in the correct order, but I still have the problem that my character never moves when I hit the forward key. I wondered if I am doing anything completely wrong, or if I have just missed something out.
else if (GetAsyncKeyState(VK_UP))
{
    XMFLOAT4X4 currrentWorld = _zaz.GetWorld();
    XMMATRIX currentWorldM = XMLoadFloat4x4(&currrentWorld);
    XMVECTOR scale = currentWorldM.r[1];
    XMVECTOR rotation = currentWorldM.r[2];
    XMVECTOR translation = currentWorldM.r[3];
    XMVECTOR forward = currentWorldM.r[4];
    forward = XMVector3Normalize(forward);
    translation = XMVectorAdd(translation, forward);
    newWorld = XMMATRIX(scale, rotation, translation, forward);
    _zaz.SetWorld(&newWorld);
    _zaz.UpdateWorld();
}
The other problem is that _zaz.UpdateWorld() only seems to work inside my update method, even though all keyboard controls are checked on update.
void GameObject::UpdateWorld()
{
    XMMATRIX scale = XMLoadFloat4x4(&_scale);
    XMMATRIX rotate = XMLoadFloat4x4(&_rotate);
    XMMATRIX translate = XMLoadFloat4x4(&_translate);
    XMStoreFloat4x4(&_world, scale * rotate * translate);
}

void GameObject::SetWorld(XMMATRIX* world)
{
    XMStoreFloat4x4(&_world, *world);
}
It is definitely reading in the values; I have the game break on the "up" key being pressed, which should run the above code, and all of the values are set. The car does currently rotate, and if I rotate the car slightly and then break the game, only the rotation and forward vectors change, which should be OK?
(Screenshot: watch window showing all of the vectors after the second iteration of the key-up code.)
If you have any ideas, it would help massively.
I assumed that it would be similar to:
else if (GetAsyncKeyState(VK_CONTROL))
{
    XMFLOAT4 position = _cameraFree.GetEye();
    XMFLOAT4 direction = _cameraFree.GetForward();
    position.x -= (direction.x * 0.05f);
    position.y -= (direction.y * 0.05f);
    position.z -= (direction.z * 0.05f);
    _cameraFree.SetEye(position);
    _cameraFree.CalculateViewProjection();
}
This uses XMFLOAT4s for the camera's data, which was easy to do. With the XMFLOAT4X4s and XMMATRIXs that my GameObject relies on, it has so far been impossible to get this working. If there is any more info that I can supply to help, then please say!
Hope this helps you help me.
The way you imagine currentWorldM is completely wrong.
First, you are doing an out-of-bounds access: XMMATRIX::r has only 4 elements, so the valid range is [0..3].
Second, a 4x4 matrix is not built like that; it contains everything merged together. You have to see it more like a frame, with three vectors i, j, k and a position O. One of them is your forward vector, one is up, and one is right (or left).
A scale, for example, shows up in r[0].x, r[1].y and r[2].z, and this is only true as long as you do not concatenate it with a rotation; after that, it will be spread over the full matrix.
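To make that concrete, here is a rough sketch of the key handler under the row-vector DirectXMath convention, where rows 0-2 of the world matrix hold the rotated basis vectors and row 3 holds the translation. Treat the choice of r[2] as the forward direction as an assumption; which row is "forward" depends on how the model was authored (local +Z is common):
else if (GetAsyncKeyState(VK_UP))
{
    XMFLOAT4X4 currentWorld = _zaz.GetWorld();
    XMMATRIX w = XMLoadFloat4x4(&currentWorld);
    XMVECTOR forward = XMVector3Normalize(w.r[2]);               // k basis vector (local +Z)
    w.r[3] = XMVectorAdd(w.r[3], XMVectorScale(forward, 0.05f)); // move the position along it
    _zaz.SetWorld(&w);
}
One caveat, judging from the code shown in the question: calling UpdateWorld() afterwards rebuilds _world from _scale * _rotate * _translate, which overwrites whatever SetWorld stored, and that would explain why the object never appears to move.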

Logistic regression for fault detection in an image

Basically, I want to detect a fault in an image using logistic regression. I'm hoping to get some feedback on my approach, which is as follows:
For training:
Take a small section of the image marked "bad" and "good"
Greyscale them, then break them up into a series of 5*5 pixel segments
Calculate the histogram of pixel intensities for each of these segments
Pass the histograms along with the labels to the Logistic Regression class for training
Then, for prediction, break the whole image into 5*5 segments and predict "good"/"bad" for each one.
Using the sigmoid function, the logistic regression hypothesis is:
h(x) = 1 / (1 + e^(-θ·x))
where x is the vector of input values and theta (θ) is the vector of weights. I use batch gradient descent to train the model. My code for this is:
void LogisticRegression::Train(float** trainingSet, float* labels, int m)
{
    // Accumulated gradient for each weight.
    float tempThetaValues[m_NumberOfWeights];
    for (int iteration = 0; iteration < 10000; ++iteration)
    {
        // Reset the accumulated gradient.
        memset(tempThetaValues, 0, m_NumberOfWeights * sizeof(float));
        float error = 0.0f;
        // For each training example in the set
        for (int trainingExample = 0; trainingExample < m; ++trainingExample)
        {
            float* x = trainingSet[trainingExample];
            float y = labels[trainingExample];
            // Residual; this drives the partial derivative of the cost function.
            float h = Hypothesis(x) - y;
            for (int i = 0; i < m_NumberOfWeights; ++i)
            {
                tempThetaValues[i] += h * x[i];
            }
            // The actual J(theta) kept giving NaN, so track MSE for now;
            // h is already the residual, so the squared error is h * h.
            error += h * h;
        }
        // Update the weights using batch gradient descent.
        for (int theta = 0; theta < m_NumberOfWeights; ++theta)
        {
            m_pWeights[theta] = m_pWeights[theta] - 0.1f * tempThetaValues[theta];
        }
        printf("Cost on iteration[%d] = %f\n", iteration, error);
    }
}
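For reference, the update these loops implement is the batch gradient-descent rule θ_j := θ_j - α · Σ_{i=1..m} (h_θ(x^(i)) - y^(i)) · x_j^(i), with α = 0.1. Note there is no 1/m factor, so the effective step size grows with the number of training examples; dividing the accumulated gradient by m (or lowering α) is a common way to keep the updates from diverging.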
The sigmoid and the hypothesis themselves are calculated using:
float LogisticRegression::Sigmoid(float z) const
{
    return 1.0f / (1.0f + exp(-z));
}

float LogisticRegression::Hypothesis(float* x) const
{
    float z = 0.0f;
    for (int index = 0; index < m_NumberOfWeights; ++index)
    {
        z += m_pWeights[index] * x[index];
    }
    return Sigmoid(z);
}
And the final prediction is given by:
int LogisticRegression::Predict(float* x)
{
    return Hypothesis(x) > 0.5f;
}
As we are using a histogram of intensities, the input and weight arrays are 255 elements long. My hope is to use it on something like a picture of an apple with a bruise, and use it to identify the bruised parts. The (normalized) histograms for the whole bruised and apple training sets look something like this:
For the "good" sections of the apple (y=0):
For the "bad" sections of the apple (y=1):
I'm not 100% convinced that using the intensities alone will produce the results I want, but even so, using it on a clearly separable data set isn't working either. To test it, I passed it a labeled, completely white image and a completely black image. I then ran it on the small image below:
Even on this image it fails to identify any segments as being black.
Using MSE, I see that the cost converges downward to a point where it remains; for the black-and-white test it starts at about 250 and settles around 100. The apple chunks start at about 4000 and settle around 1600.
What I can't tell is where the issues are.
Is the approach sound but the implementation broken? Is logistic regression the wrong algorithm to use for this task? Is gradient descent not robust enough?
I forgot to answer this... Basically, the problem was in my histograms, which weren't being memset to 0 when generated. As to the overall question of whether logistic regression on greyscale images was a good solution, the answer is no. Greyscale just didn't provide enough information for good classification; using all colour channels was a bit better, but I think the complexity of the problem I was trying to solve (bruises in apples) was a bit much for simple logistic regression on its own. You can see the results on my blog here.
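For anyone hitting the same thing, the fix amounts to clearing the histogram buffer before accumulating into it. A minimal sketch (pixels and numPixels are hypothetical names for one greyscale segment's data; 256 bins for 8-bit intensities):
float histogram[256];
memset(histogram, 0, sizeof(histogram)); // the missing step: clear stale values
for (int p = 0; p < numPixels; ++p)
    histogram[pixels[p]] += 1.0f;        // count occurrences of each intensity
for (int b = 0; b < 256; ++b)
    histogram[b] /= (float)numPixels;    // normalize so segments are comparable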

PID controllers for travelling an exact distance with velocity

I am trying to make a robot in a simulation move an exact distance by sending messages with a linear velocity. Right now my implementation doesn't make the robot move an exact distance. This is some sample code:
void Robot::travel(double x, double y)
{
    // px and py are the current positions (constantly updated as the robot moves in the simulation)
    // x and y are the target position that I want to go to
    double startx = px;
    double starty = py;
    double distanceTravelled = 0;
    align(x, y);
    // This gets the distance from the robot's current position to the target position
    double distance = calculateDistance(px, py, x, y);
    // Message with velocity
    geometry_msgs::Twist msg;
    msg.linear.x = 1;
    msg.angular.z = 0;
    ros::Rate loop_rate(10);
    while (distanceTravelled < distance)
    {
        distanceTravelled = calculateDistance(startx, starty, px, py);
        // Publishes a message telling the robot to move with the linear.x velocity
        RobotVelocity_pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
}
I asked around, and someone suggested that using a PID controller for a feedback loop would solve this problem, but after reading the Wikipedia page I don't really get how I'd use it in this context. The Wikipedia page has pseudocode for the PID algorithm, but I have no idea what corresponds to what.
previous_error = setpoint - process_feedback
integral = 0
start:
    wait(dt)
    error = setpoint - process_feedback
    integral = integral + (error * dt)
    derivative = (error - previous_error) / dt
    output = (Kp * error) + (Ki * integral) + (Kd * derivative)
    previous_error = error
    goto start
How would I implement this in the context of velocities and distances? Is the distance the error? Or the velocity? What are the integral, the derivative, Kp, Ki, Kd, etc.? Can anyone help, please?
Thanks.
For your problem, the setpoint will be (x, y), process_feedback will be (px, py), and output will be the velocity at which you need to travel. Kp, Ki, and Kd are parameters that you can tune to get the kind of behavior you want. For example, if Kd is too low, you could shoot past the target by not slowing down enough as you approach it.
A PID controller takes three things into consideration:
error: where you want to be vs. where you are
This is certainly a big factor. If you are at point A and your target is at point B, then the vector from A to B tells you a lot about how you need to steer, but it isn't the only factor.
derivative: how fast you are approaching
If you are approaching the target quickly and you are close to it, you actually need to slow down. The derivative helps take that into consideration.
integral: alignment error
Your robot may not actually do exactly what you tell it to do. The integral helps determine how much you need to compensate for that.
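Putting this together with the travel() method from the question, a minimal sketch might look like the following. Here the remaining distance to the target is used as the error and the PID output as the commanded linear velocity; the gains kp/ki/kd and the 0.01 m stopping tolerance are hypothetical values that would need tuning for the simulated robot:
void Robot::travel(double x, double y)
{
    align(x, y);
    const double kp = 1.0, ki = 0.0, kd = 0.1; // tunable gains
    const double dt = 0.1;                     // matches the 10 Hz loop rate
    double integral = 0.0;
    double previousError = calculateDistance(px, py, x, y);
    ros::Rate loop_rate(10);
    while (ros::ok())
    {
        double error = calculateDistance(px, py, x, y); // setpoint minus feedback
        if (error < 0.01) break;                        // close enough: stop
        integral += error * dt;
        double derivative = (error - previousError) / dt;
        geometry_msgs::Twist msg;
        msg.linear.x = kp * error + ki * integral + kd * derivative;
        msg.angular.z = 0;
        RobotVelocity_pub.publish(msg);
        previousError = error;
        ros::spinOnce();
        loop_rate.sleep();
    }
    RobotVelocity_pub.publish(geometry_msgs::Twist()); // zero velocity so the robot halts
}
Note that because the error here is an unsigned distance, this sketch cannot command a negative velocity to back up after an overshoot; projecting the error onto the robot's heading (so it can go negative) would handle that case.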