I want to transform this data (I was told to do it from the object's perspective). The data looks like this:
[0, -20.790001, -4.49] make up the acceleration xyz coordinates - accel(x,y,z).
[-0.762739, -3.364226, -8.962189] make up angle xyz coordinates - angle(x,y,z).
Should I use Rodrigues' rotation formula or a linear transformation (rotation) matrix? Is this any different with sensor data?
I am able to read the data from the .csv file, but I am unsure how to apply the transformation in C++ and how to create a matrix in C++.
As long as you have a formula for the transformation of the data, you just need to apply it. As for the matrix and creating one, there are multiple ways: either by using a two-dimensional array:
float matrix[3][3]; // or float** if you want to use pointers
or using a class (or struct, up to you) which stores the dimensions and the elements:
#include <vector>

class Matrix {
    int rows, cols;
    std::vector<float> data; // rows * cols elements, e.g. row-major
};
Good luck!
Note: these are just sketches; they obviously won't do anything useful out of the box.
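Since the question mentions Rodrigues' rotation formula, here is a minimal sketch of applying it to a 3-vector. It assumes a unit rotation axis k and an angle theta in radians; note that the angle(x, y, z) triple in the question may well be Euler angles, which would need a different treatment:
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}

static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Rodrigues' formula: v_rot = v*cos(theta) + (k x v)*sin(theta)
//                             + k*(k . v)*(1 - cos(theta))
Vec3 rodrigues(const Vec3& v, const Vec3& k, double theta) {
    const double c = std::cos(theta), s = std::sin(theta);
    const Vec3 kxv = cross(k, v);
    const double kdv = dot(k, v);
    Vec3 out;
    for (int i = 0; i < 3; ++i)
        out[i] = v[i] * c + kxv[i] * s + k[i] * kdv * (1.0 - c);
    return out;
}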
I am looking for an idiomatic and efficient solution for this problem:
Let's say I have a 3D tensor with which I want to represent an image of 100*100 pixels on 3 color channels,
Eigen::Tensor<int, 3> input(3,100,100);
The output I would like to get could be stored in
Eigen::Tensor<int, 4> output(3,3,100,100);
I would like to project the 3D input into the 4D output in such a way that each color channel in the original tensor gets its own individual 3D tensor in the output, with each copy containing the same values, that is
output(0,0,42,42) = output(0,1,42,42) = output(0,2,42,42)
output(0,0,12,12) = output(0,1,12,12) = output(0,2,12,12)
Originally I wanted to solve this with the following method:
Chip out the individual color channels.
Broadcast the individual color channels to the size I need.
Reshape the broadcast result into the desired format (this is just a 3D tensor at this point).
Concatenate the individual 3D tensors into a big 4D one.
I have two problems with this approach.
Firstly, I just cannot get the reshaping right: it always gives back a reshaped tensor with the dimensionality I want, but the coefficients get shuffled. I started to experiment with the layout of the tensors, but that did not seem to help.
Secondly, this seems very tedious; I feel there should be a more convenient way to achieve this, but I could not find any hint of one in the documentation.
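For what it's worth, a reshape-then-broadcast sketch along these lines avoids chip/concatenate entirely. It assumes Eigen's default column-major layout (the coefficient shuffling described above is typically a layout mismatch):
#include <unsupported/Eigen/CXX11/Tensor>

Eigen::Tensor<int, 3> input(3, 100, 100);
input.setRandom();

// Insert a singleton axis after the channel axis, then repeat along it.
Eigen::array<Eigen::Index, 4> shape{{3, 1, 100, 100}};
Eigen::array<Eigen::Index, 4> repeat{{1, 3, 1, 1}};
Eigen::Tensor<int, 4> output = input.reshape(shape).broadcast(repeat);
// Now output(c, 0, i, j) == output(c, 1, i, j) == output(c, 2, i, j) == input(c, i, j).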
I'm trying to implement AABBs/OOBBs with MathGeoLib because of the ease of operating with BBs (and because I wanted to test some things with that library).
The problem is that the engine's object transformations are based on glm, since we started with glm (and they work properly), but when it comes to transforming the OOBBs according to an object, it doesn't work very well.
What I basically do is pass the game object's translation, orientation and scale to a function (I tried to pass a global matrix, but it doesn't work; it seems to 'add' the transformation instead of setting it, and I can't access the OOBB's matrix). That function does the following:
glm::vec3 pos = passedPosition - OOBBPreviousPos; // delta since the last update
glm::mat4 Transformation = glm::translate(glm::mat4(1.0f), pos) *
                           glm::mat4_cast(passedRot) *
                           glm::scale(glm::mat4(1.0f), passedScale);
glm::mat4 resMat = glm::transpose(Transformation); // glm is column-major, MathGeoLib row-major
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat));
This basically transposes the glm matrix (I have seen that that's the way of 'translating' between the two), passes it as a float*, and then constructs the MathGeoLib matrix from it. I have debugged it and the values seem to be right for the object, so the next thing I do is actually transform the OOBB and then recompute the AABB so it encloses the OOBB, like this:
m_OBB.Transform(mat);
m_AABB.SetNegativeInfinity(); //Sets AABB to "null"
m_AABB.Enclose(m_OBB);
The final behaviour is pretty strange; believe me when I say this is the closest I've been to getting it right. I've spent some days testing different things and nothing works better (passing global/local matrices directly, trying different ways of passing/constructing the transformation data, checking whether the glm-to-MathGeoLib conversion is correct...). It rotates, but not around its own axis, and the scaling goes crazy (although translation works). Its current behaviour can be seen here: https://gfycat.com/quarrelsomefineduck (blue cubes are AABBs, green ones are OOBBs).
Am I doing something wrong with the mathematical calculations or the data transfer?
I kept looking into it, but then a friend pointed me in another direction, so I finally solved it (or better said: I "worked around it") by storing the object's initial AABB and passing the game object's global matrix to the function mentioned above. Then, inside the function, I used another MathGeoLib function to transform the OOBB.
That function finally looks like:
glm::mat4 resMat = glm::transpose(GlobalMatrixPassed);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat)); //"Translate" glm matrix passed into a MathGeoLib one
m_OOBB.SetFrom(m_InitialAABB); //Set OOBB from the initial aabb
m_OOBB.Transform(mat); //Transform it
m_AABB.SetFrom(m_OOBB); //Set the AABB in function of the transformed OOBB
I've got a blob with the shape n * t * h * w (Batch number, features, height, width). Within a Caffe layer I want to do an L1 normalization along the t axis, i.e. for fixed n, h and w the sum of values along t should be equal to 1. Normally this would be no big deal, but since it's within a Caffe layer it should happen very quickly, preferably through the Caffe math functions (based on BLAS). Is there a way to achieve this in an efficient manner?
I unfortunately can't change the order of the shape parameters due to later processing, but I can remove the batch number (have a vector of blobs with just t * h * w) or I could convert the blob to an OpenCV Mat, if it makes things easier.
Edit 1: I'm starting to suspect I might be able to solve my task with the help of caffe_gpu_gemm, where I'd multiply a vector of ones of length t with a blob from one batch of shape t * h * w, which should theoretically give me the sums along the feature axis. I'll update if I figure out the next step.
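For the record, a rough sketch of that gemm idea for a single batch item follows. ones_, sums_ and expanded_ are hypothetical helper blobs (t, h*w and t*h*w elements respectively) that would be allocated in LayerSetUp, and it assumes non-negative values; a true L1 norm would sum absolute values instead:
const int spatial = h * w;
// sums_ (1 x spatial) = ones_ (1 x t) * bottom_data (t x spatial):
// per-pixel sums along the t axis.
caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, 1, spatial, t,
    Dtype(1), ones_.gpu_data(), bottom_data, Dtype(0),
    sums_.mutable_gpu_data());
// expanded_ (t x spatial) = ones_ (t x 1) * sums_ (1 x spatial):
// tile the per-pixel sums back over the t axis.
caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, t, spatial, 1,
    Dtype(1), ones_.gpu_data(), sums_.gpu_data(), Dtype(0),
    expanded_.mutable_gpu_data());
// Element-wise division: each (h, w) position now sums to 1 along t.
caffe_gpu_div<Dtype>(t * spatial, bottom_data, expanded_.gpu_data(), top_data);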
I am working on an algorithm with many computations done on a GPU. I'm working mainly with oclMat structures and am trying to avoid copying from CPU to GPU and vice versa, yet I cannot find an easy way to:
compare all elements of an ocl matrix to a specific single value (be it float or double, for instance) and create a logical matrix accordingly
create an oclMat matrix with a given size and type initialized with all elements to a specific value (for example all elements are float and equal to 1.234567)
For example:
cv::ocl::oclMat M1 =...
// DO STUFF WITH M1
cv::ocl::oclMat logicalM1 = M1>1.55; // compare directly to a single value
cv::ocl::oclMat logicalM2 = ... ; // i.e. I want a 100x100 CV_32FC1 matrix with all elements set to be equal to 1.234567
From reading the documentation, it seems cv::ocl::compare only works with two matrices of the same dimensions and type, so maybe my first request isn't feasible. On the other hand, I don't know how to initialize a specific matrix directly in ocl (with cv::Mat I know how it's done).
I assume an easy workaround exists, but haven't found one yet... Thanks!
You are right. It looks like cv::ocl::compare supports only two cv::ocl::oclMat inputs.
But you can create oclMat filled with specific value as follows:
cv::ocl::oclMat logicalM2(M1.size(), M1.type());
logicalM2.setTo(cv::Scalar(1.234567));
cv::ocl::oclMat logicalM1;
cv::ocl::compare(M1, logicalM2, logicalM1, cv::CMP_GT);
P.S. I also suggest trying the new OpenCV 3.0 with the Transparent API, which makes processing on the GPU using OpenCL much easier.
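For illustration, a minimal sketch with the Transparent API could look like this (cv::UMat operations are dispatched to the GPU through OpenCL when a device is available):
#include <opencv2/core.hpp>

cv::UMat m1(100, 100, CV_32FC1, cv::Scalar(1.234567f)); // filled with one value
cv::UMat logical;
cv::compare(m1, 1.55, logical, cv::CMP_GT); // per-element m1 > 1.55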
I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have a C++ code that uses Eigen 3. I will have an Eigen Vector that will contain x,y, and z values, and two Eigen matrices equivalent to those produced by Meshgrid in Matlab. I would like to interpolate the z values from the Vectors onto the grid points defined by the Meshgrid equivalents (which will extend past the outside of the original points a bit, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe, I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively, what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for later use. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib functionality. Presumably an interpolation is used somewhere in there for display purposes. Does anyone know if it's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a cpp file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
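A rough sketch along the lines of the embedding example in that manual might look like the following. It uses the older octave_main/feval API (newer Octave versions use octave::interpreter instead), the point data is made up, and the program would be built with mkoctfile --link-stand-alone:
#include <octave/oct.h>
#include <octave/octave.h>
#include <octave/parse.h>

int main()
{
    // Start an embedded, quiet Octave interpreter (old-style API).
    string_vector argv(2);
    argv(0) = "embedded";
    argv(1) = "-q";
    octave_main(2, argv.c_str_vec(), 1);

    // Load the package providing griddata, if your version needs it.
    octave_value_list pkg_args;
    pkg_args(0) = octave_value("load");
    pkg_args(1) = octave_value("geometry");
    feval("pkg", pkg_args, 0);

    // Scattered points (x, y, z) and a 2x2 query grid (xi, yi).
    Matrix x(4, 1), y(4, 1), z(4, 1);
    x(0,0) = 0; x(1,0) = 1; x(2,0) = 0; x(3,0) = 1;
    y(0,0) = 0; y(1,0) = 0; y(2,0) = 1; y(3,0) = 1;
    z(0,0) = 0; z(1,0) = 1; z(2,0) = 2; z(3,0) = 3;
    Matrix xi(2, 2), yi(2, 2);
    xi(0,0) = 0.25; xi(0,1) = 0.75; xi(1,0) = 0.25; xi(1,1) = 0.75;
    yi(0,0) = 0.25; yi(0,1) = 0.25; yi(1,0) = 0.75; yi(1,1) = 0.75;

    octave_value_list in;
    in(0) = octave_value(x); in(1) = octave_value(y); in(2) = octave_value(z);
    in(3) = octave_value(xi); in(4) = octave_value(yi);
    octave_value_list out = feval("griddata", in, 1);
    Matrix zi = out(0).matrix_value(); // interpolated z on the grid
    return 0;
}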
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and its VoxelGrid filter. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each (x, y) bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why this works here). So you need to massage the x and y values when you're done.
Example:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// Read in a list of comma-separated values (x, y, z).
FILE* fp = fopen("points.xyz", "r");

// Store them in PCL's point cloud format.
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZ>);
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) != EOF)
{
    pcl::PointXYZ basic_point;
    basic_point.x = x;
    basic_point.y = y;
    basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size();
basic_cloud_ptr->height = 1;

// Create the object for the result.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>());

// Create the filtering object and process.
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud(basic_cloud_ptr);
// Set the bin sizes here (dx, dy, dz). For 2D results, make one of the bins
// larger than the data-set span in that axis.
sor.setLeafSize(0.1f, 0.1f, 1000.0f);
sor.filter(*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2D matrix and go through the point cloud, assigning points to their (x, y) bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
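That last binning step could be sketched roughly like this (the grid origin, dimensions and bin sizes are made up for illustration and must match the leaf sizes used above):
#include <vector>

const double dx = 0.1, dy = 0.1;     // must match the VoxelGrid leaf sizes
const double xmin = 0.0, ymin = 0.0; // assumed grid origin
const int nx = 100, ny = 100;        // assumed grid dimensions
std::vector<std::vector<double>> grid(ny, std::vector<double>(nx, 0.0));
for (const auto& p : cloud_filtered->points)
{
    const int ix = static_cast<int>((p.x - xmin) / dx);
    const int iy = static_cast<int>((p.y - ymin) / dy);
    if (ix >= 0 && ix < nx && iy >= 0 && iy < ny)
        grid[iy][ix] = p.z; // one z value per occupied (x, y) bin
}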