There are two vectors, "ImgCoordinatesList" and "ImgCoordinatesListCopy".
The first vector continuously receives values via push_back. Each element is an object (list_obj) of a struct I defined earlier, holding x, y, and Type (int, int, string).
Once the function "ImgCreation" is called, the contents of the first vector should be copied into the "ImgCoordinatesListCopy" vector, and the contents of this copy should be used for all further processing.
But the way I did it produced errors.
void SliceImageNew::BuildImgCoordinatesList(const long X, const long Y, string Type)
{
    // Creates the structure for one (x, y, type) coordinate
    list_obj.x = X;
    list_obj.y = Y;
    list_obj.Coordinates_Type = Type;
    // Pushes the structure into a vector, which eventually forms a vector of structs
    ImgCoordinatesList.push_back(list_obj);
}
void SliceImageNew::ImgCreation()
{
    ImgCoordinatesListCopy.resize(ImgCoordinatesList.size());
    copy(ImgCoordinatesList.begin(), ImgCoordinatesList.end(), ImgCoordinatesListCopy.begin());
    //ImgCoordinatesListCopy = ImgCoordinatesList; // simpler equivalent: copy the contents in one assignment
    ImgCoordinatesList.clear(); // Clear the vector after copying the contents into the other vector
    PlotImgCoordinates();
    //SaveSliceImg();
    //ClearImgCoordinatesList();
}
void SliceImageNew::PlotImgCoordinates()
{
    static int SliceImgCount = 1;
    // Blank 1920x1080 image with a white background (8-bit, so the 0-255 colors and JPEG output work)
    Mat SliceImg(Size(1920, 1080), CV_8UC3, Scalar(255, 255, 255));
    for (size_t i = 1; i < ImgCoordinatesListCopy.size(); i++)
    {
        // Color differentiation between Mark and Jump lines
        if (ImgCoordinatesListCopy[i].Coordinates_Type == "Mark")
        {
            //cout << "This is a mark line" << endl;
            line(SliceImg, Point(ImgCoordinatesListCopy[i - 1].x, ImgCoordinatesListCopy[i - 1].y), Point(ImgCoordinatesListCopy[i].x, ImgCoordinatesListCopy[i].y), Scalar(255, 255, 155), 4, 8, 0);
        }
        else
        {
            //cout << "This is a jump line" << endl;
            line(SliceImg, Point(ImgCoordinatesListCopy[i - 1].x, ImgCoordinatesListCopy[i - 1].y), Point(ImgCoordinatesListCopy[i].x, ImgCoordinatesListCopy[i].y), Scalar(255, 100, 155), 4, 8, 0);
        }
    }
    // Creating legends for the plot (note Scalar(255, 0, 0): a bare parenthesized list would be the comma operator)
    putText(SliceImg, "Mark Line", Point(1600, 40),
            FONT_HERSHEY_SIMPLEX, 0.8, Scalar(255, 0, 0), 2);
    line(SliceImg, Point(1540, 35), Point(1590, 35), Scalar(255, 255, 155), 4, 8, 0);
    putText(SliceImg, "Jump Line", Point(1600, 80),
            FONT_HERSHEY_SIMPLEX, 0.8, Scalar(255, 0, 0), 2);
    line(SliceImg, Point(1540, 75), Point(1590, 75), Scalar(255, 100, 155), 4, 8, 0);
    // Providing a unique name for every picture that is saved
    name.str(""); // reset the (assumed) member stringstream so names don't accumulate
    name << "Slice" << SliceImgCount << ".jpg";
    // Saving the image
    imwrite(name.str(), SliceImg);
    SliceImgCount++; // Increment the count to keep image names unique
    waitKey(0);
}
I have attached an image which shows the code and the error generated.
While debugging, the highlighted line in the image produced that error.
Could someone help me?
Found the solution!
I had messed up the object declaration of the "SliceImageNew" class. After correcting it, everything works fine and the image is drawn perfectly.
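For anyone who lands here with the same symptom, a hypothetical minimal illustration of that kind of mistake (the instance names are invented; the asker's actual declaration is not shown): the member vectors live per object, so the points must be pushed and plotted through the same SliceImageNew instance.
SliceImageNew builder;
builder.BuildImgCoordinatesList(10, 20, "Mark");
builder.BuildImgCoordinatesList(30, 40, "Jump");

SliceImageNew other;   // a second, unrelated instance
other.ImgCreation();   // plots nothing: its ImgCoordinatesList is empty

builder.ImgCreation(); // works: same instance that received the points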
I have a map with some reference positions that correspond to the centers (small crosses) of some objects, like this:
I take pictures to find my objects, but the pictures contain some noise, so I can't always find all of the objects; the result can look something like this:
From the few positions that were found, I need to know where in the picture the other, not-found objects should be. I've been reading about this for the last couple of days and experimenting, but I can't find a proper way of doing it. Some examples start by calculating the centers of mass and translating them onto each other, then rotating; other examples use least-squares minimization and start with a rotation. I can't use OpenCV or any other APIs, just plain C++, though I can use the Eigen library if that helps. Can anyone give me some pointers on this?
EDIT:
I've solved the correspondence between points. The picture is never very different from the reference, so for each found position I can search for its corresponding reference. In brief, I have one 2D matrix with reference points and another 2D matrix with found points. In the matrix of found points, the not-found points are stored as NaN just to keep the matrix size the same; the NaN points are not used in the calculations.
Since you have already matched the points to one another, finding the transform is straightforward:
Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
    // get translation
    auto refCom = centerOfMass(refCloud);
    auto refAtOrigin = refCloud.colwise() - refCom;
    auto targetCom = centerOfMass(targetCloud);
    auto targetAtOrigin = targetCloud.colwise() - targetCom;
    // get scale
    auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();
    // get rotation
    auto covMat = refAtOrigin * targetAtOrigin.transpose();
    auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
    auto rot = svd.matrixV() * svd.matrixU().transpose();
    // combine the transformations
    Eigen::Affine2d trans = Eigen::Affine2d::Identity();
    trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
    return trans;
}
refCloud is your reference point set and targetCloud is the set of points you have found in your image. It is important that the clouds match index-wise, so refCloud[n] must be the point corresponding to targetCloud[n]. This means you have to remove all NaNs from your matrix and cherry-pick the correspondences in your reference point set.
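As a hedged sketch of that cherry-picking step (not part of the original answer; it assumes the question's convention that misses are stored as NaN columns), both clouds could be filtered like this:
#include <Eigen/Dense>
#include <utility>

// Drop every column where the found cloud contains a NaN, keeping the
// reference and found clouds index-matched.
std::pair<Eigen::Matrix2Xd, Eigen::Matrix2Xd>
removeMisses(Eigen::Matrix2Xd const& ref, Eigen::Matrix2Xd const& found)
{
    Eigen::Matrix2Xd refOut(2, ref.cols()), foundOut(2, found.cols());
    Eigen::Index kept = 0;
    for (Eigen::Index c = 0; c < found.cols(); ++c)
    {
        if (!found.col(c).hasNaN())
        {
            refOut.col(kept) = ref.col(c);
            foundOut.col(kept) = found.col(c);
            ++kept;
        }
    }
    refOut.conservativeResize(2, kept);
    foundOut.conservativeResize(2, kept);
    return {refOut, foundOut};
}
findAffine can then be called with the two returned matrices.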
Here is a full example. I'm using OpenCV to draw the stuff:
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

using Point = Eigen::Vector2d;

template <typename TMatrix>
Point centerOfMass(TMatrix const& points)
{
    return points.rowwise().sum() / points.cols();
}

Eigen::Affine2d findAffine(Eigen::Matrix2Xd const& refCloud, Eigen::Matrix2Xd const& targetCloud)
{
    // get translation
    auto refCom = centerOfMass(refCloud);
    auto refAtOrigin = refCloud.colwise() - refCom;
    auto targetCom = centerOfMass(targetCloud);
    auto targetAtOrigin = targetCloud.colwise() - targetCom;
    // get scale
    auto scale = targetAtOrigin.rowwise().norm().sum() / refAtOrigin.rowwise().norm().sum();
    // get rotation
    auto covMat = refAtOrigin * targetAtOrigin.transpose();
    auto svd = covMat.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV);
    auto rot = svd.matrixV() * svd.matrixU().transpose();
    // combine the transformations
    Eigen::Affine2d trans = Eigen::Affine2d::Identity();
    trans.translate(targetCom).scale(scale).rotate(rot).translate(-refCom);
    return trans;
}
void drawCloud(cv::Mat& img, Eigen::Matrix2Xd const& cloud, Point const& origin, Point const& scale, cv::Scalar const& color, int thickness = cv::FILLED)
{
    for (int c = 0; c < cloud.cols(); c++)
    {
        auto p = origin + cloud.col(c).cwiseProduct(scale);
        cv::circle(img, {int(p.x()), int(p.y())}, 5, color, thickness, cv::LINE_AA);
    }
}
int main()
{
    // generate sample reference
    std::vector<Point> points = {{4, 9}, {4, 4}, {6, 9}, {6, 4}, {8, 9}, {8, 4}, {10, 9}, {10, 4}, {12, 9}, {12, 4}};
    Eigen::Matrix2Xd fullRefCloud(2, points.size());
    for (int i = 0; i < int(points.size()); i++)
        fullRefCloud.col(i) = points[i];

    // generate sample target
    Eigen::Matrix2Xd refCloud = fullRefCloud.leftCols(fullRefCloud.cols() * 0.6);
    Eigen::Affine2d refTransformation = Eigen::Affine2d::Identity();
    refTransformation.translate(Point(8, -4)).rotate(4.3).translate(-centerOfMass(refCloud)).scale(1.5);
    Eigen::Matrix2Xd targetCloud = refTransformation * refCloud;

    // find the transformation
    auto transform = findAffine(refCloud, targetCloud);
    std::cout << "Original: \n" << refTransformation.matrix() << "\n\nComputed: \n" << transform.matrix() << "\n";

    // apply the computed transformation
    Eigen::Matrix2Xd queryCloud = fullRefCloud.rightCols(fullRefCloud.cols() - refCloud.cols());
    queryCloud = transform * queryCloud;

    // draw it (white canvas so the black axis lines are visible)
    Point scale = {15, 15}, origin = {100, 300};
    cv::Mat img(600, 600, CV_8UC3, cv::Scalar::all(255));
    cv::line(img, {0, int(origin.y())}, {800, int(origin.y())}, {});
    cv::line(img, {int(origin.x()), 0}, {int(origin.x()), 800}, {});
    drawCloud(img, refCloud, origin, scale, {0, 255, 0});
    drawCloud(img, fullRefCloud, origin, scale, {255, 0, 0}, 1);
    drawCloud(img, targetCloud, origin, scale, {0, 0, 255});
    drawCloud(img, queryCloud, origin, scale, {255, 0, 255}, 1);
    cv::flip(img, img, 0);
    cv::imshow("img", img);
    cv::waitKey();
    return 0;
}
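One caveat the answer above does not handle: for noisy or nearly collinear point sets, V·Uᵀ from the SVD can come out as a reflection (determinant −1) instead of a rotation. The standard Kabsch correction flips the last column of V; a hedged sketch of how the rotation step inside findAffine could be guarded:
// Inside findAffine, replacing the single-line rotation estimate:
Eigen::Matrix2d u = svd.matrixU();
Eigen::Matrix2d v = svd.matrixV();
Eigen::Matrix2d rot = v * u.transpose();
if (rot.determinant() < 0)
{
    v.col(1) *= -1;          // Kabsch sign correction
    rot = v * u.transpose(); // now a proper rotation
}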
I managed to make it work with the code from here:
https://github.com/oleg-alexandrov/projects/blob/master/eigen/Kabsch.cpp
I'm calling the Find3DAffineTransform function and passing it my 2D maps; since this function expects 3D maps, I've set all z coordinates to 0 and it works. If I have some time, I'll try to adapt it to 2D.
Meanwhile a fellow programmer (Regis :-) also found this, which should work too:
https://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60
It's the function umeyama(), which returns the transformation between two point sets. It's part of the Eigen library. I didn't have the time to test it.
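For reference, a minimal sketch of how umeyama() can be called on 2D point sets (points stored one per column; the sample values are made up for illustration):
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <iostream>

int main()
{
    // Reference points, one 2D point per column.
    Eigen::Matrix2Xd ref(2, 4);
    ref << 0, 1, 1, 0,
           0, 0, 1, 1;

    // Apply a known similarity transform: rotate 0.5 rad, scale 1.5x, translate.
    Eigen::Rotation2Dd rot(0.5);
    Eigen::Matrix2Xd target =
        (1.5 * (rot.toRotationMatrix() * ref)).colwise() + Eigen::Vector2d(3, -2);

    // umeyama() recovers the transform; for 2D input it returns a 3x3
    // homogeneous matrix. The third argument enables scale estimation.
    Eigen::Matrix3d transform = Eigen::umeyama(ref, target, true);
    std::cout << transform << "\n";
    return 0;
}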
I am playing around with the QCustomPlot library for Qt. I have created some plots successfully, but I still have a couple of questions:
1: How can I set my y-axis range from 0% to 100%?
2: My tick labels are centered below the ticks. How can I change that to a left alignment?
Thanks for your help.
Peter
// generate some data:
QVector<double> x(101), y(101); // initialize with entries 0..100
for (int i = 0; i < 101; ++i)
{
    x[i] = i * 960;        // x goes from 0 to 96000 in steps of 960
    y[i] = x[i] / 96000.0; // y rises linearly from 0 to 1
}
// create graph and assign data to it:
ui->customPlot->addGraph();
ui->customPlot->graph(0)->setData(x, y);
// set axes ranges, so we see all data:
ui->customPlot->xAxis->setTickLabelType(QCPAxis::ltDateTime);
ui->customPlot->xAxis->setDateTimeSpec(Qt::UTC);
ui->customPlot->xAxis->setDateTimeFormat("hh:mm");
ui->customPlot->xAxis->setAutoTickStep(false);
ui->customPlot->xAxis->setTickStep(3600);
ui->customPlot->xAxis->setRange(0, 86399);
ui->customPlot->yAxis->setRange(0, 1);
ui->customPlot->replot();
ui->customPlot->xAxis->setBasePen(QPen(Qt::white, 1));
ui->customPlot->yAxis->setBasePen(QPen(Qt::white, 1));
ui->customPlot->xAxis->setTickPen(QPen(Qt::white, 1));
ui->customPlot->yAxis->setTickPen(QPen(Qt::white, 1));
ui->customPlot->xAxis->setSubTickPen(QPen(Qt::white, 1));
ui->customPlot->yAxis->setSubTickPen(QPen(Qt::white, 1));
ui->customPlot->xAxis->setTickLabelColor(Qt::white);
ui->customPlot->yAxis->setTickLabelColor(Qt::white);
ui->customPlot->xAxis->grid()->setPen(QPen(QColor(140, 140, 140), 1, Qt::DotLine));
ui->customPlot->yAxis->grid()->setPen(QPen(QColor(140, 140, 140), 1, Qt::DotLine));
ui->customPlot->xAxis->grid()->setSubGridPen(QPen(QColor(80, 80, 80), 1, Qt::DotLine));
ui->customPlot->yAxis->grid()->setSubGridPen(QPen(QColor(80, 80, 80), 1, Qt::DotLine));
ui->customPlot->xAxis->grid()->setSubGridVisible(true);
ui->customPlot->yAxis->grid()->setSubGridVisible(true);
ui->customPlot->xAxis->grid()->setZeroLinePen(Qt::NoPen);
ui->customPlot->yAxis->grid()->setZeroLinePen(Qt::NoPen);
ui->customPlot->xAxis->setUpperEnding(QCPLineEnding::esSpikeArrow);
ui->customPlot->yAxis->setUpperEnding(QCPLineEnding::esSpikeArrow);
You should be able to solve both questions with:
ui->customPlot->yAxis->setRange(0, 100, Qt::AlignLeft);
EDIT: to show custom text for ticks, you should add this code:
QVector<double> TickValues;
QVector<QString> TickLabels;
// you can safely change the values according to the output
TickValues << 0 << 20 << 40 << 60 << 80 << 100;
TickLabels << "0" << "20%" << "40%" << "60%" << "80%" << "100%";
// disable default ticks and their labels
ui->customPlot->yAxis->setAutoTicks(false);
ui->customPlot->yAxis->setAutoTickLabels(false);
// add your custom values and labels
ui->customPlot->yAxis->setTickVector(TickValues);
ui->customPlot->yAxis->setTickVectorLabels(TickLabels);
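Note that setAutoTicks()/setTickVector() belong to the QCustomPlot 1.x API. On QCustomPlot 2.x, a rough equivalent (an untested sketch) uses a text ticker instead:
// QCustomPlot 2.x replaces setAutoTicks()/setTickVector() with ticker objects.
QSharedPointer<QCPAxisTickerText> textTicker(new QCPAxisTickerText);
textTicker->addTick(0, "0");
textTicker->addTick(20, "20%");
textTicker->addTick(40, "40%");
textTicker->addTick(60, "60%");
textTicker->addTick(80, "80%");
textTicker->addTick(100, "100%");
ui->customPlot->yAxis->setTicker(textTicker);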
I am trying to obtain a counter that goes up from 0 to n, then decreases from n-1 back down to 0, and repeats the cycle over and over.
In this example, written in Processing, I would like the background to go from black (i=0) to white (i=255) incrementally, then from white back to black incrementally, and so forth. At the moment it only goes from black to white and then jumps back to black suddenly.
int i = 0;

void setup(){
    size(640, 360);
    frameRate(60);
}

void draw(){
    background(i);
    i++;
    if(i==256){i=0;}
}
Try -
int change = 1;

void draw(){
    background(i);
    i = i + change;
    if(i==256){change = -1;}
    if(i==0){change = 1;}
}
Another way to look at this question would be: "How do I generate a triangle wave?".
I like this approach because it does not need any "if"s. Something like this would do:
triangleWave = maxNumber - abs(incrementedVar % (2*maxNumber) - maxNumber);
For example, with maxNumber = 3, incrementing incrementedVar from 0 to 6 yields 0, 1, 2, 3, 2, 1, 0, and the cycle then repeats.
Cool, isn't it?
I have some old code using this; it doesn't draw the wave itself, but uses it for size and fill color. There is also a sine wave for comparison. Check it out:
float zigZag, toIncrement, speed = 1, maxNumber = 255;
float sine, x = 270, speed2 = 1;

void setup() {
    size(800, 400);
    background(255);
}

void draw() {
    background(255);

    // triangle wave
    toIncrement += speed;
    zigZag = maxNumber - abs(toIncrement % (2*maxNumber) - maxNumber);
    fill(zigZag);
    noStroke();
    ellipse(150, height/2+100, 50, 50);
    strokeWeight(zigZag);
    stroke(0);
    line(100, height/2-100, 200, height/2-100);
    text("triangle = " + int(zigZag), 100, height-30);
    println("triangle wave value = " + zigZag);

    // sine wave
    x += speed2;
    sine = (1+sin(radians(x)))*(maxNumber/2);
    fill(sine);
    noStroke();
    ellipse(650, height/2+100, 50, 50);
    strokeWeight(sine);
    stroke(0);
    line(600, height/2-100, 700, height/2-100);
    fill(80);
    text("sine = " + int(sine), 600, height-30);
}
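For readers outside Processing, the same branch-free formula works anywhere; a minimal standalone C++ sketch that prints one full 0 → 255 → 0 cycle:
#include <cstdio>
#include <cstdlib>

int main()
{
    const int maxNumber = 255;
    for (int t = 0; t <= 2 * maxNumber; ++t)
    {
        // triangle wave: rises to maxNumber, then falls back to 0, no ifs
        int triangle = maxNumber - std::abs(t % (2 * maxNumber) - maxNumber);
        std::printf("%d\n", triangle);
    }
    return 0;
}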
I am using cvPutText in a loop and it works fine (printing some data connected to different contours near each contour). The thing is, when I try to use another cvPutText in or before the loop (for example, printing the frame number in the upper left corner of the image), only the first cvPutText is executed and printed; the second is ignored.
The code looks like this:
char text[80];
Then, in every loop iteration:
char nam[] = "id : ";
char na[] = " area : ";
char ka[] = "\n cNr : ";
sprintf(text,"%s%d%s%d%s%d", nam, (*obListIter)->id, ka, contNumber, na ,area);
CvFont font;
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.4, 0.4, 0, 1, 8);
cvPutText(cv_obj_rgb, text, cvPoint(boxPoints[4].x, boxPoints[4].y), &font, cvScalar(255, 255, 255, 0));
The other cvPutText looks exactly the same, only with a different font (font2), different chars, and different text.
Can anybody help? I have already lost a couple of days to this, and I really need this feature to analyse the performance of my Kalman filter and finally finish my bachelor thesis.
You should only call cvInitFont once per font; don't repeat it in the loop.
CvFont font1;
cvInitFont(&font1, CV_FONT_HERSHEY_SIMPLEX, 0.4, 0.4, 0, 1, 8);
CvFont font2;
cvInitFont(&font2, CV_FONT_HERSHEY_SIMPLEX, 0.4, 0.4, 0, 1, 8);
loop:
    char nam[] = "id : ";
    char na[] = " area : ";
    char ka[] = "\n cNr : ";
    sprintf(text, "%s%d%s%d%s%d", nam, (*obListIter)->id, ka, contNumber, na, area);
    cvPutText(cv_obj_rgb, text, cvPoint(boxPoints[4].x, boxPoints[4].y), &font1, cvScalar(255, 255, 255, 0)); // note: &font1, not the old &font
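For completeness, a hedged, self-contained sketch of the pattern with both calls (the image, the frame number, and the fake contour labels are invented for illustration; the point is only that each cvInitFont runs once, outside the loop):
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cstdio>

int main()
{
    IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
    cvZero(img);

    // Initialize each font exactly once, outside any loop.
    CvFont font1, font2;
    cvInitFont(&font1, CV_FONT_HERSHEY_SIMPLEX, 0.4, 0.4, 0, 1, 8);
    cvInitFont(&font2, CV_FONT_HERSHEY_SIMPLEX, 0.6, 0.6, 0, 1, 8);

    // One call before the loop: the frame number in the upper left corner.
    char header[32];
    std::sprintf(header, "frame: %d", 1);
    cvPutText(img, header, cvPoint(10, 20), &font2, cvScalar(255, 255, 255, 0));

    // One call per iteration: a label near each (here: made-up) contour.
    char text[80];
    for (int id = 0; id < 3; ++id)
    {
        std::sprintf(text, "id : %d", id);
        cvPutText(img, text, cvPoint(50, 60 + 30 * id), &font1, cvScalar(255, 255, 255, 0));
    }

    cvSaveImage("labels.png", img);
    cvReleaseImage(&img);
    return 0;
}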
What am I doing wrong here?
vector<vector<Point> > contourElement;
for (int counter = 0; counter < contours->size(); counter++)
{
    contourElement.push_back(contours->at(counter));
    const Point *elementPoints[1] = {contourElement.at(0)};
    int numberOfPoints[] = {contourElement.at(0).size()};
    fillPoly(contourMask, elementPoints, numberOfPoints, 1, Scalar(0, 0, 0), 8);
I keep getting an error on the const Point part. The compiler says
error: cannot convert 'std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >' to 'const cv::Point*' in initialization
What am I doing wrong? (PS: Obviously, ignore the missing brace at the end of the for loop; this is only part of my code.)
Just for the record (and because the OpenCV documentation is very sparse here), a more reduced snippet using the C++ API:
std::vector<cv::Point> fillContSingle;
[...]
//add all points of the contour to the vector
fillContSingle.push_back(cv::Point(x_coord,y_coord));
[...]
std::vector<std::vector<cv::Point> > fillContAll;
//fill the single contour
//(one could add multiple other similar contours to the vector)
fillContAll.push_back(fillContSingle);
cv::fillPoly( image, fillContAll, cv::Scalar(128));
Let's analyse the offending line:
const Point *elementPoints [1] = { contourElement.at(0) };
You declared contourElement as vector <vector<Point> >, which means that contourElement.at(0) returns a vector<Point> and not a const cv::Point*. So that's the first error.
In the end, you need to do something like:
vector<Point> tmp = contourElement.at(0);
const Point* elementPoints[1] = { &tmp[0] };
int numberOfPoints = (int)tmp.size();
Later, call it as:
fillPoly (contourMask, elementPoints, &numberOfPoints, 1, Scalar (0, 0, 0), 8);
contourElement is a vector of vector<Point>, not of Point :)
So instead of:
const Point *elementPoints
put:
const vector<Point> *elementPoints
Some people may arrive here due to an apparent bug in samples/cpp/create_mask.cpp from OpenCV. With the above explanation in mind, I edited the "if (event == EVENT_RBUTTONUP)" branch to:
...
mask = Mat::zeros(src.size(), CV_8UC1);
vector<Point> tmp = pts;
const Point* elementPoints[1] = { &tmp[0] };
int npts = (int) pts.size();
cout << "elementsPoints=" << elementPoints << endl;
fillPoly(mask, elementPoints, &npts, 1, Scalar(255, 255, 255), 8);
bitwise_and(src, src, final, mask);
...
Hope it helps someone.