I have a map with some reference positions that correspond to the center (small cross) of some objects like this:
I take pictures to find my objects, but the pictures contain some noise, so I can't always find all of the objects; it can look something like this:
From the few found positions I need to know where the other, not found, objects should be in the picture. I've been reading about this and experimenting for the last couple of days, but I can't find a proper way of doing it. Some examples start by calculating the centers of mass and translating them onto each other, then rotating; other examples use least-squares minimization and start with a rotation. I can't use OpenCV or any other APIs, just plain C++. I can use the Eigen library if that helps. Can anyone give me some pointers on this?
EDIT:
I've solved the correspondence between points: the picture is never very different from the reference, so for each found position I can search for its corresponding reference. In brief, I have one 2D matrix with the reference points and another 2D matrix with the found points. In the matrix of found points, the points that were not found are stored as NaN just to keep the matrix sizes equal; the NaN points are not used in the calculations.
Since you have already matched the points to one another, finding the transform is straightforward:
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
    // get translation
    auto refCom = centerOfMass(refCloud);
    auto refAtOrigin = refCloud.colwise() - refCom;
    auto targetCom = centerOfMass(targetCloud);
    auto targetAtOrigin = targetCloud.colwise() - targetCom;

    // get scale
    auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();

    // get rotation
    auto covMat = refAtOrigin * targetAtOrigin.transpose();
    auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
    auto rot = svd.matrixV() * svd.matrixU().transpose();

    // combine the transformations
    Eigen::Affine2d trans = Eigen::Affine2d::Identity();
    trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
    return trans;
}
refCloud is your reference point set and targetCloud is the set of points you have found in your image. It is important that the clouds match index-wise, so refCloud[n] must be the point corresponding to targetCloud[n]. This means that you have to remove all NaNs from your matrix and cherry-pick the correspondences in your reference point set.
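In case it helps, that NaN filtering could look roughly like this (a minimal sketch under the assumptions from the question: points are stored column-wise in 2xN matrices and missing detections are NaN columns; buildMatchedClouds is a name I made up, not part of the code above):

#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Keep only the columns where the target point was actually found, so that
// refOut.col(n) and targetOut.col(n) correspond to each other index-wise.
void buildMatchedClouds(Eigen::Matrix2Xd const& fullRef,
                        Eigen::Matrix2Xd const& foundWithNaNs,
                        Eigen::Matrix2Xd& refOut,
                        Eigen::Matrix2Xd& targetOut)
{
    std::vector<int> valid;
    for (int c = 0; c < foundWithNaNs.cols(); ++c)
        if (!std::isnan(foundWithNaNs(0, c)) && !std::isnan(foundWithNaNs(1, c)))
            valid.push_back(c);

    refOut.resize(2, valid.size());
    targetOut.resize(2, valid.size());
    for (int i = 0; i < int(valid.size()); ++i)
    {
        refOut.col(i) = fullRef.col(valid[i]);
        targetOut.col(i) = foundWithNaNs.col(valid[i]);
    }
}

The two matched clouds can then be passed straight to findAffine.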
Here is a full example. I'm using OpenCV to draw the stuff:
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

using Point = Eigen::Vector2d;

template <typename TMatrix>
Point centerOfMass(TMatrix const& points)
{
    return points.rowwise().sum() / points.cols();
}

Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
    // get translation
    auto refCom = centerOfMass(refCloud);
    auto refAtOrigin = refCloud.colwise() - refCom;
    auto targetCom = centerOfMass(targetCloud);
    auto targetAtOrigin = targetCloud.colwise() - targetCom;

    // get scale
    auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();

    // get rotation
    auto covMat = refAtOrigin * targetAtOrigin.transpose();
    auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
    auto rot = svd.matrixV() * svd.matrixU().transpose();

    // combine the transformations
    Eigen::Affine2d trans = Eigen::Affine2d::Identity();
    trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
    return trans;
}

void drawCloud(cv::Mat& img, Eigen::Matrix2Xd const& cloud, Point const& origin, Point const& scale, cv::Scalar const& color, int thickness = cv::FILLED)
{
    for (int c = 0; c < cloud.cols(); c++)
    {
        auto p = origin + cloud.col(c).cwiseProduct(scale);
        cv::circle(img, {int(p.x()), int(p.y())}, 5, color, thickness, cv::LINE_AA);
    }
}

int main()
{
    // generate sample reference
    std::vector<Point> points = {{4, 9}, {4, 4}, {6, 9}, {6, 4}, {8, 9}, {8, 4}, {10, 9}, {10, 4}, {12, 9}, {12, 4}};
    Eigen::Matrix2Xd fullRefCloud(2, points.size());
    for (int i = 0; i < points.size(); i++)
        fullRefCloud.col(i) = points[i];

    // generate sample target
    Eigen::Matrix2Xd refCloud = fullRefCloud.leftCols(fullRefCloud.cols() * 0.6);
    Eigen::Affine2d refTransformation = Eigen::Affine2d::Identity();
    refTransformation.translate(Point(8, -4)).rotate(4.3).translate(-centerOfMass(refCloud)).scale(1.5);
    Eigen::Matrix2Xd targetCloud = refTransformation * refCloud;

    // find the transformation
    auto transform = findAffine(refCloud, targetCloud);
    std::cout << "Original: \n" << refTransformation.matrix() << "\n\nComputed: \n" << transform.matrix() << "\n";

    // apply the computed transformation
    Eigen::Matrix2Xd queryCloud = fullRefCloud.rightCols(fullRefCloud.cols() - refCloud.cols());
    queryCloud = transform * queryCloud;

    // draw it
    Point scale = {15, 15}, origin = {100, 300};
    cv::Mat img(600, 600, CV_8UC3);
    cv::line(img, {0, int(origin.y())}, {800, int(origin.y())}, {});
    cv::line(img, {int(origin.x()), 0}, {int(origin.x()), 800}, {});
    drawCloud(img, refCloud, origin, scale, {0, 255, 0});
    drawCloud(img, fullRefCloud, origin, scale, {255, 0, 0}, 1);
    drawCloud(img, targetCloud, origin, scale, {0, 0, 255});
    drawCloud(img, queryCloud, origin, scale, {255, 0, 255}, 1);
    cv::flip(img, img, 0);
    cv::imshow("img", img);
    cv::waitKey();
    return 0;
}
I managed to make it work with the code from here:
https://github.com/oleg-alexandrov/projects/blob/master/eigen/Kabsch.cpp
I'm calling the Find3DAffineTransform function and passing it my 2D point sets; since this function expects 3D points, I set all z coordinates to 0 and it works. If I have some time I'll try to adapt it to 2D.
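In case it is useful to someone, the z = 0 padding looks roughly like this (a sketch from memory; as far as I remember, Find3DAffineTransform from the linked Kabsch.cpp takes two 3xN Eigen matrices of corresponding points and returns an Eigen::Affine3d, but check the file itself):

#include <Eigen/Dense>
#include <Eigen/Geometry>
// Find3DAffineTransform is declared in the Kabsch.cpp linked above.

// Pad a 2xN cloud with a zero z coordinate so it can be fed to the 3D routine.
Eigen::Matrix3Xd to3D(Eigen::Matrix2Xd const& cloud2d)
{
    Eigen::Matrix3Xd cloud3d = Eigen::Matrix3Xd::Zero(3, cloud2d.cols());
    cloud3d.topRows<2>() = cloud2d;
    return cloud3d;
}

// Usage, assuming refCloud and targetCloud are matched 2xN matrices:
//   Eigen::Affine3d T = Find3DAffineTransform(to3D(refCloud), to3D(targetCloud));
// The 2D rotation/scale is the upper-left 2x2 block of T.linear(), and the 2D
// translation is the first two entries of T.translation().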
Meanwhile a fellow programmer (Regis :-) also found this, which should work:
https://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60
It's the function umeyama(), which returns the transformation between two point sets. It's part of the Eigen library. I didn't have time to test it yet.
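For reference, a minimal umeyama() sketch (untested here, just going by the Eigen documentation; the inputs are matched point sets stored column-wise, and the third argument toggles scale estimation):

#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

int main()
{
    // Two matched 2xN point sets; dst is src rotated, scaled and translated.
    Eigen::Matrix2Xd src(2, 3), dst(2, 3);
    src << 0, 1, 0,
           0, 0, 1;
    Eigen::Rotation2Dd rot(0.5);
    for (int c = 0; c < src.cols(); ++c)
        dst.col(c) = 1.5 * (rot * src.col(c)) + Eigen::Vector2d(2, -1);

    // Returns a 3x3 homogeneous matrix T with dst ~ T * src (for 2D points).
    Eigen::Matrix3d T = Eigen::umeyama(src, dst, /*with_scaling=*/true);
    std::cout << T << "\n";
    return 0;
}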
There are two vectors, "ImgCoordinatesList" and "ImgCoordinatesListCopy".
The first vector continuously receives values that are pushed back (x, y, Type; I have previously defined a struct containing an int, an int and a string, and list_obj is an object of that struct).
Once the function "ImgCreation" is called, the contents of the first vector should be copied into the "ImgCoordinatesListCopy" vector, and the contents of this copy vector should be used for further processing.
But the method I used produced errors.
void SliceImageNew::BuildImgCoordinatesList(const long X, const long Y, string Type)
{
    //Creates the Structure for one x,y,type coordinate
    list_obj.x = X;
    list_obj.y = Y;
    list_obj.Coordinates_Type = Type;
    //Pushes the structure into a vector which eventually forms a vector of structs
    ImgCoordinatesList.push_back(list_obj);
}

void SliceImageNew::ImgCreation()
{
    ImgCoordinatesListCopy.resize(ImgCoordinatesList.size());
    copy(ImgCoordinatesList.begin(), ImgCoordinatesList.end(), ImgCoordinatesListCopy.begin());
    //ImgCoordinatesListCopy = ImgCoordinatesList; //Copy the contents into another vector and create image with the copy vector
    ImgCoordinatesList.erase(ImgCoordinatesList.begin(), ImgCoordinatesList.end()); //Clear the vector after copying the contents into another vector
    PlotImgCoordinates();
    //SaveSliceImg();
    //ClearImgCoordinatesList();
}

void SliceImageNew::PlotImgCoordinates()
{
    static int SliceImgCount = 1;
    Mat SliceImg(Size(1920, 1080), CV_16UC3); // Blank Image with White Background and 1920*1080 Dimensions
    for (int i = 1; i != ImgCoordinatesListCopy.size(); i++)
    {
        //Color differentiation between Mark and Jump Lines
        if (ImgCoordinatesListCopy[i].Coordinates_Type == "Mark")
        {
            //cout << "This is a mark line" << endl;
            line(SliceImg, Point(ImgCoordinatesListCopy[i - 1].x, ImgCoordinatesListCopy[i - 1].y), Point(ImgCoordinatesListCopy[i].x, ImgCoordinatesListCopy[i].y), Scalar(255, 255, 155), 4, 2, 0);
        }
        else
        {
            //cout << "This is a jump line" << endl;
            line(SliceImg, Point(ImgCoordinatesListCopy[i - 1].x, ImgCoordinatesListCopy[i - 1].y), Point(ImgCoordinatesListCopy[i].x, ImgCoordinatesListCopy[i].y), Scalar(255, 100, 155), 4, 2, 0);
        }
    }
    //Creating Legends for the Plot
    putText(SliceImg, "Mark Line", cvPoint(1600, 40),
            FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2);
    line(SliceImg, Point(1540, 35), Point(1590, 35), Scalar(255, 255, 155), 4, 2, 0);
    putText(SliceImg, "Jump Line", cvPoint(1600, 80),
            FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2);
    line(SliceImg, Point(1540, 75), Point(1590, 75), Scalar(255, 100, 155), 4, 2, 0);
    //Providing unique names for every picture that is being saved
    name << "Slice" << SliceImgCount << ".jpg";
    // Saving the image
    imwrite(name.str(), SliceImg);
    SliceImgCount++; //Increment the count to provide unique names to the images
    waitKey(0);
}
I have attached an image which shows the code and the error generated
While debugging, the highlighted line in the image produced that error!
Could someone help me?
Found the solution!
I had messed up the object declaration of the "SliceImageNew" class. After correcting it, the code works fine and draws the image perfectly.
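As a side note, unrelated to the actual bug: the copy-and-clear in ImgCreation can be written more simply. A small standalone sketch (using a plain struct and std::vector, not your actual class):

#include <string>
#include <utility>
#include <vector>

struct Coord { int x; int y; std::string type; };

int main()
{
    std::vector<Coord> list, copyOfList;
    list.push_back({1, 2, "Mark"});

    copyOfList = list;   // copies all elements; no resize() + std::copy needed
    list.clear();        // same effect as erase(begin(), end())

    // If the original contents are no longer needed, a move avoids the copy:
    // copyOfList = std::move(list); list.clear();
    return 0;
}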
I'm trying to run a fuzzy inference system (FIS) application on a Freescale Kinetis K70F120M development board using CodeWarrior.
I wrote an interpreter program that reads two plain text files (one containing a fuzzy model, the other containing the patterns to be recognized) and writes a C++ application that can be uploaded to a development board to operate on data read by sensors.
All the information contained in the FIS C++ application is written by the interpreter, so there is no chance I miscalculated the dimensions of a vector, because the number of elements was counted from the data contained in the files.
I've managed to run a FIS example application on the board, but when I try to build and run the actual application I need, I get the following 71 errors:
Description Resource Path Location Type
C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s co-processor offset out of range Prueba FALLAS 2 line 8696, external location: C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s C/C++ Problem
C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s co-processor offset out of range Prueba FALLAS 2 line 8697, external location: C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s C/C++ Problem
...
C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s co-processor offset out of range Prueba FALLAS 2 line 15897, external location: C:\Users\CRISTH~1\AppData\Local\Temp\ccLOvcxh.s C/C++ Problem
mingw32-make: *** [Sources/main.o] Error 1 Prueba FALLAS 2 C/C++ Problem
This is the code I'm trying to run:
#include "derivative.h"
#include "network.h"
int main() {
Network ReporteDeFallasCEC;
const int NInputs = 50, NClasses = 10, NRules = 32;
ReporteDeFallasCEC.initialize();
ReporteDeFallasCEC.setClassNeurons(NClasses);
ReporteDeFallasCEC.setRuleNeurons(NRules, 0, Network::Pruning());
Universe input1;
input1.setLimits(0, 100);
input1.addFuzzySet(FuzzySet("vs", 0, 17.52, 35.04));
input1.addFuzzySet(FuzzySet("s", 9.3, 29.65, 50));
input1.addFuzzySet(FuzzySet("m", 23.82, 45.24, 66.67));
input1.addFuzzySet(FuzzySet("l", 50, 66.67, 83.33));
input1.addFuzzySet(FuzzySet("vl", 66.67, 83.33, 100));
ReporteDeFallasCEC.addVariable(input1);
Universe input2;
input2.setLimits(0, 100);
input2.addFuzzySet(FuzzySet("vs", 0, 24.75, 49.51));
input2.addFuzzySet(FuzzySet("s", 5.45, 27.72, 50));
input2.addFuzzySet(FuzzySet("m", 33.33, 5, 6.67));
input2.addFuzzySet(FuzzySet("l", 50, 66.67, 83.33));
input2.addFuzzySet(FuzzySet("vl", 66.67, 83.33, 100));
ReporteDeFallasCEC.addVariable(input2);
//...here goes the remaining "universe" objects
Universe input50;
input50.setLimits(0, 100);
input50.addFuzzySet(FuzzySet("vs", 0, 22.79, 45.57));
input50.addFuzzySet(FuzzySet("s", 0, 34.55, 100));
input50.addFuzzySet(FuzzySet("m", 20.59, 43.63, 66.67));
input50.addFuzzySet(FuzzySet("l", 42.4, 68.49, 95.99));
input50.addFuzzySet(FuzzySet("vl", 66.67, 83.33, 100));
ReporteDeFallasCEC.addVariable(input50);
ClassNeuron InterferenciaDeGas;
ClassNeuron TuberiaDesancladaConGolpeDeFluido;
ClassNeuron RoturaDeVarilla;
ClassNeuron FugaEnLaValvulaFijaDePie;
ClassNeuron FugaEnLaValvulaViajera;
ClassNeuron BarrilDeLaBombaDoblado;
ClassNeuron BuenLlenadoConTuberiaAnclada;
ClassNeuron AgujeroEnElBarrilDeLaBomba;
ClassNeuron AnclaDeTuberiaEnMalFuncionamiento;
ClassNeuron BarrilDeLaBombaGastado;
RuleNeuron FLRule65248697 = RuleNeuron();
FLRule65248697.setNumInputsNeurons(NInputs);
FLRule65248697.setNumClasses(NClasses);
FLRule65248697.setAntecedent(0, 0);
FLRule65248697.setAntecedent(1, 0);
FLRule65248697.setAntecedent(2, 0);
FLRule65248697.setAntecedent(3, 0);
FLRule65248697.setAntecedent(4, 0);
FLRule65248697.setAntecedent(5, 0);
FLRule65248697.setAntecedent(6, 0);
FLRule65248697.setAntecedent(7, 0);
FLRule65248697.setAntecedent(8, 0);
FLRule65248697.setAntecedent(9, 0);
FLRule65248697.setAntecedent(10, 0);
FLRule65248697.setAntecedent(11, 2);
FLRule65248697.setAntecedent(12, 2);
FLRule65248697.setAntecedent(13, 3);
FLRule65248697.setAntecedent(14, 3);
FLRule65248697.setAntecedent(15, 3);
FLRule65248697.setAntecedent(16, 4);
FLRule65248697.setAntecedent(17, 4);
FLRule65248697.setAntecedent(18, 4);
FLRule65248697.setAntecedent(19, 4);
FLRule65248697.setAntecedent(20, 4);
FLRule65248697.setAntecedent(21, 4);
FLRule65248697.setAntecedent(22, 4);
FLRule65248697.setAntecedent(23, 4);
FLRule65248697.setAntecedent(24, 4);
FLRule65248697.setAntecedent(25, 3);
FLRule65248697.setAntecedent(26, 4);
FLRule65248697.setAntecedent(27, 4);
FLRule65248697.setAntecedent(28, 4);
FLRule65248697.setAntecedent(29, 4);
FLRule65248697.setAntecedent(30, 4);
FLRule65248697.setAntecedent(31, 4);
FLRule65248697.setAntecedent(32, 4);
FLRule65248697.setAntecedent(33, 4);
FLRule65248697.setAntecedent(34, 4);
FLRule65248697.setAntecedent(35, 4);
FLRule65248697.setAntecedent(36, 4);
FLRule65248697.setAntecedent(37, 4);
FLRule65248697.setAntecedent(38, 4);
FLRule65248697.setAntecedent(39, 4);
FLRule65248697.setAntecedent(40, 4);
FLRule65248697.setAntecedent(41, 4);
FLRule65248697.setAntecedent(42, 4);
FLRule65248697.setAntecedent(43, 4);
FLRule65248697.setAntecedent(44, 4);
FLRule65248697.setAntecedent(45, 4);
FLRule65248697.setAntecedent(46, 4);
FLRule65248697.setAntecedent(47, 4);
FLRule65248697.setAntecedent(48, 4);
FLRule65248697.setAntecedent(49, 4);
FLRule65248697.setConsecuent(1);
ReporteDeFallasCEC.addRule(FLRule65248697);
RuleNeuron FLRule50510248 = RuleNeuron();
FLRule50510248.setNumInputsNeurons(NInputs);
FLRule50510248.setNumClasses(NClasses);
FLRule50510248.setAntecedent(0, 0);
FLRule50510248.setAntecedent(1, 0);
FLRule50510248.setAntecedent(2, 0);
FLRule50510248.setAntecedent(3, 0);
FLRule50510248.setAntecedent(4, 0);
FLRule50510248.setAntecedent(5, 0);
FLRule50510248.setAntecedent(6, 0);
FLRule50510248.setAntecedent(7, 0);
FLRule50510248.setAntecedent(8, 0);
FLRule50510248.setAntecedent(9, 0);
FLRule50510248.setAntecedent(10, 0);
FLRule50510248.setAntecedent(11, 0);
FLRule50510248.setAntecedent(12, 0);
FLRule50510248.setAntecedent(13, 0);
FLRule50510248.setAntecedent(14, 2);
FLRule50510248.setAntecedent(15, 4);
FLRule50510248.setAntecedent(16, 4);
FLRule50510248.setAntecedent(17, 4);
FLRule50510248.setAntecedent(18, 4);
FLRule50510248.setAntecedent(19, 4);
FLRule50510248.setAntecedent(20, 4);
FLRule50510248.setAntecedent(21, 4);
FLRule50510248.setAntecedent(22, 4);
FLRule50510248.setAntecedent(23, 4);
FLRule50510248.setAntecedent(24, 4);
FLRule50510248.setAntecedent(25, 3);
FLRule50510248.setAntecedent(26, 4);
FLRule50510248.setAntecedent(27, 4);
FLRule50510248.setAntecedent(28, 4);
FLRule50510248.setAntecedent(29, 4);
FLRule50510248.setAntecedent(30, 4);
FLRule50510248.setAntecedent(31, 4);
FLRule50510248.setAntecedent(32, 4);
FLRule50510248.setAntecedent(33, 4);
FLRule50510248.setAntecedent(34, 4);
FLRule50510248.setAntecedent(35, 4);
FLRule50510248.setAntecedent(36, 4);
FLRule50510248.setAntecedent(37, 4);
FLRule50510248.setAntecedent(38, 4);
FLRule50510248.setAntecedent(39, 4);
FLRule50510248.setAntecedent(40, 4);
FLRule50510248.setAntecedent(41, 4);
FLRule50510248.setAntecedent(42, 4);
FLRule50510248.setAntecedent(43, 4);
FLRule50510248.setAntecedent(44, 4);
FLRule50510248.setAntecedent(45, 4);
FLRule50510248.setAntecedent(46, 4);
FLRule50510248.setAntecedent(47, 4);
FLRule50510248.setAntecedent(48, 4);
FLRule50510248.setAntecedent(49, 4);
FLRule50510248.setConsecuent(2);
ReporteDeFallasCEC.addRule(FLRule50510248);
//...here goes the remaining "RuleNeuron" objects
RuleNeuron FLRule2056998 = RuleNeuron();
FLRule2056998.setNumInputsNeurons(NInputs);
FLRule2056998.setNumClasses(NClasses);
FLRule2056998.setAntecedent(0, 0);
FLRule2056998.setAntecedent(1, 0);
FLRule2056998.setAntecedent(2, 0);
FLRule2056998.setAntecedent(3, 0);
FLRule2056998.setAntecedent(4, 0);
FLRule2056998.setAntecedent(5, 0);
FLRule2056998.setAntecedent(6, 0);
FLRule2056998.setAntecedent(7, 0);
FLRule2056998.setAntecedent(8, 0);
FLRule2056998.setAntecedent(9, 0);
FLRule2056998.setAntecedent(10, 0);
FLRule2056998.setAntecedent(11, 1);
FLRule2056998.setAntecedent(12, 1);
FLRule2056998.setAntecedent(13, 1);
FLRule2056998.setAntecedent(14, 1);
FLRule2056998.setAntecedent(15, 2);
FLRule2056998.setAntecedent(16, 2);
FLRule2056998.setAntecedent(17, 3);
FLRule2056998.setAntecedent(18, 3);
FLRule2056998.setAntecedent(19, 4);
FLRule2056998.setAntecedent(20, 3);
FLRule2056998.setAntecedent(21, 4);
FLRule2056998.setAntecedent(22, 3);
FLRule2056998.setAntecedent(23, 3);
FLRule2056998.setAntecedent(24, 4);
FLRule2056998.setAntecedent(25, 0);
FLRule2056998.setAntecedent(26, 0);
FLRule2056998.setAntecedent(27, 0);
FLRule2056998.setAntecedent(28, 0);
FLRule2056998.setAntecedent(29, 0);
FLRule2056998.setAntecedent(30, 0);
FLRule2056998.setAntecedent(31, 0);
FLRule2056998.setAntecedent(32, 0);
FLRule2056998.setAntecedent(33, 1);
FLRule2056998.setAntecedent(34, 1);
FLRule2056998.setAntecedent(35, 1);
FLRule2056998.setAntecedent(36, 1);
FLRule2056998.setAntecedent(37, 2);
FLRule2056998.setAntecedent(38, 2);
FLRule2056998.setAntecedent(39, 2);
FLRule2056998.setAntecedent(40, 2);
FLRule2056998.setAntecedent(41, 3);
FLRule2056998.setAntecedent(42, 3);
FLRule2056998.setAntecedent(43, 3);
FLRule2056998.setAntecedent(44, 4);
FLRule2056998.setAntecedent(45, 4);
FLRule2056998.setAntecedent(46, 4);
FLRule2056998.setAntecedent(47, 4);
FLRule2056998.setAntecedent(48, 3);
FLRule2056998.setAntecedent(49, 4);
FLRule2056998.setConsecuent(3);
ReporteDeFallasCEC.addRule(FLRule2056998);
const int nPatterns = 60;
double patternArray[nPatterns][NInputs] = {
{ 6.60, 9.70, 12.2, 4.60, 5.70, 8.70, 12.9, 7.60, 11.8, 7.90, 24.9, 44.3, 55.0, 63.6, 71.3, 75.0, 76.8, 80.9, 84.50, 86.50, 90.70, 91.40, 95.00, 97.30, 93.80, 69.20, 82.60, 95.80, 98.90, 97.20, 97.70, 97.80, 96.90, 97.50, 94.50, 93.50, 95.80, 92.50, 93.60, 94.20, 92.00, 90.20, 91.60, 90.20, 91.50, 91.80, 90.70, 93.90, 96.10, 95.70 },
{ 7.70, 5.50, 4.50, 0.60, 1.70, 5.90, 6.70, 6.70, 8.60, 10.1, 5.60, 5.30, 8.40, 24.3, 57.2, 79.8, 88.0, 90.8, 91.10, 91.90, 92.40, 91.80, 91.60, 91.40, 95.70, 62.20, 94.10, 96.40, 91.60, 92.40, 94.00, 97.40, 97.70, 97.80, 96.10, 94.40, 94.40, 96.00, 98.10, 99.30, 94.40, 94.50, 96.30, 96.80, 94.20, 94.50, 96.60, 98.70, 97.10, 97.30 },
//...here goes the remaining pattern rows
{ 1.95, 3.10, 2.00, 3.20, 1.90, 3.70, 5.70, 5.00, 2.10, 3.50, 3.30, 2.05, 2.95, 2.40, 2.70, 3.80, 4.10, 3.30, 4.700, 5.100, 5.300, 5.900, 8.600, 14.30, 38.70, 87.40, 94.80, 96.00, 97.00, 97.40, 98.40, 98.80, 98.10, 98.90, 96.50, 92.00, 83.80, 75.00, 67.80, 69.00, 77.00, 87.70, 94.90, 96.00, 95.80, 95.60, 95.70, 93.80, 87.60, 38.80 }, };
double pattern[NInputs];
for (int i = 0; i < nPatterns; i++) {
std::cout << "Patron " << i + 1 << ": [ ";
for (int j = 0; j < NInputs; j++) {
pattern[j] = patternArray[i][j];
std::cout << patternArray[i][j];
if (j != NInputs - 1) {
std::cout << ", ";
}
}
std::cout << " ]" << "\n";
int clase = ReporteDeFallasCEC.classificate(pattern);
std::cout << "\nClase [ " << clase << " ]\n\n";
}
//std::cin.get();
return 0;
}
This CodeWarrior Project was created in the CodeWarrior IDE v.10.6.4 as a new Bareboard Project:
Device to be used: MK70FN1M0 processor (K70F 120 MHz Family)
Project Type: Application
Connection to be used: Open Source JTAG
Language: C++
Floating Point: Hardware (-mfloat-abi=hard) vs. (-fp vfpv4)
I/O Support: Debugger Console
ARM Build Tools: GCC
Rapid Application Development: None
Start with perspective designed for: current perspective
I'm building the project with the FLASH configuration and debugging as "Prueba FALLAS 2_FLASH_OSJTAG".
I'm running CodeWarrior on Windows 7.
Help me find what's preventing the code from running on the board.
Update #1:
I've removed the code that generates the "ClassNeuron" and "RuleNeuron" objects as well as the patternArray array (including the functions that use it), so the remaining application only creates the 50 "Universe" objects. After doing that, I removed a deliberate number of Universe objects to find out whether this is a memory-limit-related issue, but whether I keep 15 or 26 objects, I get a seemingly random number of errors of the type described above (sometimes even 0 errors).
I need to run my application with that exact quantity of objects (50 Universes, 10 Classes, 32 Rules & the 50x60 Pattern Array).
I suspect that the problem has something to do with the number of objects the code creates, but I'm not sure whether there is a FLASH memory limit set by CodeWarrior when the project is compiled. Nevertheless, I'm fairly sure these errors have nothing to do with array handling, because I have entirely removed any reference to patternArray from the project and, if I were doing something wrong with arrays, the CodeWarrior IDE should still have given me some kind of clue about it.
Please help me solve this problem; it is difficult to find information on this matter, even in the NXP (formerly Freescale) community.
Update #2:
As stated in the following related questions, it appears that this error is actually a compiler bug; please confirm this for me:
cross-compilation FFTW for cortex-a15 failure: co-processor offset out of range
iphone: co-processor offset out of range
This is my post at the NXP(formerly Freescale) Community: co-processor offset out of range
Partially Solved
I moved my code to a project in Kinetis Design Studio. It compiled with no errors, and I could debug the application up to the point where the board ran out of memory; after I applied some optimization changes, everything worked fine.
Since it sounds like this may have something to do with array sizes, I'd recommend checking your array initializations first.
The error "co-processor offset out of range" doesn't point at any of your variables, but there is the array variable double patternArray[nPatterns][NInputs]. Does its initializer have the correct number of elements?
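A small, generic illustration of that check (my own sketch, not from the thread; it assumes the toolchain accepts C++11 static_assert): if you leave the first array dimension empty, the compiler deduces the number of rows from the initializer, and you can assert that it matches what you expect. The same pattern could be applied to patternArray.

// Toy example, not the real pattern data.
const int kCols = 3;
const int kExpectedRows = 2;

double table[][kCols] = {
    { 1.0, 2.0, 3.0 },
    { 4.0, 5.0, 6.0 },
};

// Fails to compile if the initializer does not have kExpectedRows rows.
static_assert(sizeof(table) / sizeof(table[0]) == kExpectedRows,
              "table does not have the expected number of rows");

int main() { return 0; }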
I have code for a graphics engine that needs to draw wireframes and line drawings. I made some adjustments to my original code and now I get the error "double free or corruption", but before that the code worked just fine. Does anybody know what I'm doing wrong?
void Wireframe::Generate(list<eye_point> &points, list<line> &lines, const ini::Configuration &configuration)
{
    string buffer;
    stringstream out;
    for(int i = 0; i < nrFigures; i++)
    {
        figure = "Figure";
        out << i;
        buffer = out.str();
        figure.append(buffer);
        out.str(string());
        cout << "the figure has the name " << figure << endl;
        Read_info(configuration);
        Generate_points(points, configuration);
        Generate_lines(lines, configuration);
    }
}
In Read_info, the info is read from the ini file.
void Wireframe::Generate_points(list<eye_point> &points, const ini::Configuration &configuration){
    Matrix schaal = Scale(scale);
    Matrix translate = Translatie(center);
    Matrix xrotate = Rotate_x_as(rotatex);
    Matrix yrotate = Rotate_y_as(rotatey);
    Matrix zrotate = Rotate_z_as(rotatez);
    Matrix eyematrix = Eye_transformatie(eye);
    Matrix matrix;
    matrix = schaal * translate * xrotate * yrotate * zrotate * eyematrix;
    if(type.compare("LineDrawing") == 0)
    {
        linedrawing_point(points, configuration, matrix);
    }
    else if(type.compare("Cube") == 0)
    {
        cube_point(points, matrix);
    }
}

void Wireframe::Generate_lines(list<line> &lines, const ini::Configuration &configuration){
    if(type.compare("LineDrawing") == 0)
    {
        linedrawing_lines(lines, configuration);
    }
    else if (type.compare("Cube") == 0)
    {
        cube_lines(lines);
    }
}
Here it determines what kind of drawing it needs to do. The LineDrawing case works just fine; the error is in the Cube case.
void Wireframe::cube_lines(list<line> &lines){
    getline(lines, 1, 5);
    getline(lines, 5, 3);
    getline(lines, 3, 7);
    getline(lines, 7, 1);
    getline(lines, 5, 2);
    getline(lines, 2, 8);
    getline(lines, 8, 3);
    getline(lines, 3, 5);
    getline(lines, 2, 6);
    getline(lines, 6, 4);
    getline(lines, 4, 8);
    getline(lines, 8, 2);
    getline(lines, 6, 1);
    getline(lines, 1, 7);
    getline(lines, 7, 4);
    getline(lines, 4, 6);
    getline(lines, 7, 3);
    getline(lines, 3, 8);
    getline(lines, 8, 4);
    getline(lines, 4, 7);
    getline(lines, 1, 6);
    getline(lines, 6, 2);
    getline(lines, 2, 5);
    getline(lines, 5, 1);
}

void Wireframe::cube_point(list<eye_point> &points, Matrix &matrix){
    getpoint(1, -1, -1, points, 1, matrix);
    getpoint(-1, 1, -1, points, 2, matrix);
    getpoint(1, 1, 1, points, 3, matrix);
    getpoint(-1, -1, 1, points, 4, matrix);
    getpoint(1, 1, -1, points, 5, matrix);
    getpoint(-1, -1, -1, points, 6, matrix);
    getpoint(1, -1, 1, points, 7, matrix);
    getpoint(-1, 1, 1, points, 1, matrix);
}

void Wireframe::projectie(Vector3D &vector_points, eye_point &point_element){
    point_element.z = vector_points.z;
    if(vector_points.z != 0)
    {
        point_element.x = vector_points.x / -vector_points.z;
        point_element.y = vector_points.y / -vector_points.z;
    }
    else
    {
        point_element.x = vector_points.x;
        point_element.y = vector_points.y;
    }
}

void Wireframe::getpoint(double x, double y, double z, list<eye_point> &points, int nummer, Matrix &matrix){
    eye_point point_element;
    Vector3D vector_points = Vector3D::point(x, y, z);
    vector_points *= matrix;
    point_element.figure = figure;
    point_element.punt = nummer;
    projectie(vector_points, point_element);
    points.push_back(point_element);
}

void Wireframe::getline(list<line> &lines, int lijn0, int lijn1){
    line line_element;
    line_element.lijn0 = lijn0;
    line_element.lijn1 = lijn1;
    line_element.figure = figure;
    line_element.linecolor = linecolor;
    lines.push_back(line_element);
}
If you are on Windows, you might want to try the Application Verifier tool, which is free and is designed to detect double-free errors: http://msdn.microsoft.com/en-us/library/ms807121.aspx
The code posted doesn't directly do any allocation or freeing, so it's not relevant to your bug.
It's likely that the objects you're putting into containers (line and eye_point) have a bug. For instance, missing an assignment operator or copy constructor could lead to all sorts of baffling behavior.
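For example (a generic sketch, not your actual line or eye_point types): if an element type owns a raw pointer and relies on the compiler-generated copy operations, copying it into a container and then destroying both copies frees the same pointer twice, which is exactly the "double free or corruption" abort.

#include <list>

// Toy element type that owns a raw buffer but has no user-defined copy
// constructor or copy assignment operator (rule of three violated).
struct BadElem {
    int* data;
    BadElem() : data(new int[8]) {}
    ~BadElem() { delete[] data; }
};

int main() {
    std::list<BadElem> elems;
    BadElem e;
    elems.push_back(e); // shallow copy: e and the stored element share one buffer
    return 0;           // both destructors run -> double free
}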