I am trying to come up with a "minimal" way of running a graph SLAM application using MRPT. The sensor data (LaserScan / Odometry) will be provided by a custom middleware similar to ROS. After reading the docs and source code (both for MRPT and the ROS bridge) extensively, I came up with the following snippet:
std::string config_file = "../../../laser_odometry.ini";
std::string rawlog_fname = "";
std::string fname_GT = "";

auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto edge_reg = mrpt::graphslam::deciders::CICPCriteriaERD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto optimizer = mrpt::graphslam::optimizers::CLevMarqGSO<mrpt::graphs::CNetworkOfPoses2DInf>{};

auto win3d = mrpt::gui::CDisplayWindow3D{"Slam", 800, 600};
auto win_observer = mrpt::graphslam::CWindowObserver{};
auto win_manager = mrpt::graphslam::CWindowManager{&win3d, &win_observer};

auto engine = mrpt::graphslam::CGraphSlamEngine<mrpt::graphs::CNetworkOfPoses2DInf>{
    config_file, rawlog_fname, fname_GT, &win_manager, &node_reg, &edge_reg, &optimizer};

for (size_t measurement_count = 0;;) {
    // grab laser scan from the network, then fill it (hardcoded values for now), e.g.:
    auto scan_ptr = mrpt::obs::CObservation2DRangeScan::Create();
    scan_ptr->timestamp = mrpt::system::now(); // MRPT expects a TTimeStamp, not a raw chrono count
    scan_ptr->rightToLeft = true;
    scan_ptr->sensorLabel = "";
    scan_ptr->aperture = 3.14; // rad (max-min)
    scan_ptr->maxRange = 3.0;  // m
    scan_ptr->sensorPose = mrpt::poses::CPose3D{};
    scan_ptr->resizeScan(30);
    for (int i = 0; i < 30; ++i) {
        scan_ptr->setScanRange(i, 0.5);
        scan_ptr->setScanRangeValidity(i, true);
    }

    { // Send LaserScan measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(scan_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // grab odometry from the network, then fill it (hardcoded values for now), e.g.:
    auto odometry_ptr = mrpt::obs::CObservationOdometry::Create();
    odometry_ptr->timestamp = mrpt::system::now();
    odometry_ptr->hasVelocities = false;
    odometry_ptr->odometry.x(0);
    odometry_ptr->odometry.y(0);
    odometry_ptr->odometry.phi(0);

    { // Send Odometry measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(odometry_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // Get pose estimation from the engine
    auto pose = engine.getCurrentRobotPosEstimation();
}
Am I headed in the right direction here? Did I miss something?
Hmm, at first look the snippet seems fine: you are providing the odometry and the laser scan in two separate steps, and in Observation form.
Minor note
auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD{};
If you want to run with Odometry + laser scans use CFixedIntervalsNRD instead. It's much better tested and actually makes use of those measurements.
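For example, against the snippet above the only change would be the type of node_reg (just a sketch; the decider takes the same graph type as template argument):

auto node_reg = mrpt::graphslam::deciders::CFixedIntervalsNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};
// the rest of the setup, including passing &node_reg to CGraphSlamEngine, stays the same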
There is no minimal graphslam-engine example in MRPT at present, but here's the main method for running graph-slam on datasets:
https://github.com/MRPT/mrpt/blob/26ee0f2d3a9366c50faa5f78d0388476ae886808/libs/graphslam/include/mrpt/graphslam/apps_related/CGraphSlamHandler_impl.h#L395
template <class GRAPH_T>
void CGraphSlamHandler<GRAPH_T>::execute()
{
    using namespace mrpt::obs;
    ASSERTDEB_(m_engine);

    // Variables initialization
    mrpt::io::CFileGZInputStream rawlog_stream(m_rawlog_fname);
    CActionCollection::Ptr action;
    CSensoryFrame::Ptr observations;
    CObservation::Ptr observation;
    size_t curr_rawlog_entry;
    auto arch = mrpt::serialization::archiveFrom(rawlog_stream);

    // Read the dataset and pass the measurements to CGraphSlamEngine
    bool cont_exec = true;
    while (CRawlog::getActionObservationPairOrObservation(
               arch, action, observations, observation, curr_rawlog_entry) &&
           cont_exec)
    {
        // actual call to the graphSLAM execution method
        // Exit if user pressed C-c
        cont_exec = m_engine->_execGraphSlamStep(
            action, observations, observation, curr_rawlog_entry);
    }
    m_logger->logFmt(mrpt::system::LVL_WARN, "Finished graphslam execution.");
}
You basically grab the data and then continuously feed it to CGraphSlamEngine via either the execGraphSlamStep or the _execGraphSlamStep method.
Here's also the relevant snippet for processing measurements in the corresponding ROS wrapper that operates with measurements from ROS topics:
https://github.com/mrpt-ros-pkg/mrpt_slam/blob/8b32136e2a381b1759eb12458b4adba65e2335da/mrpt_graphslam_2d/include/mrpt_graphslam_2d/CGraphSlamHandler_ROS_impl.h#L719
template<class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::processObservation(
    mrpt::obs::CObservation::Ptr& observ)
{
    this->_process(observ);
}

template<class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::_process(
    mrpt::obs::CObservation::Ptr& observ)
{
    using namespace mrpt::utils;
    if (!this->m_engine->isPaused())
    {
        this->m_engine->execGraphSlamStep(observ, m_measurement_cnt);
        m_measurement_cnt++;
    }
}
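In your setup the same pattern applies: whenever your middleware delivers a scan or an odometry reading, convert it to the corresponding mrpt::obs observation and push it into the engine. A rough sketch (the callback and message type names are hypothetical; only the observation type and execGraphSlamStep() are MRPT API):

// Hypothetical middleware callback; 'MyScanMsg' stands for whatever your middleware delivers.
void onScanFromMiddleware(const MyScanMsg& msg,
                          mrpt::graphslam::CGraphSlamEngine<mrpt::graphs::CNetworkOfPoses2DInf>& engine,
                          size_t& measurement_count)
{
    auto scan_ptr = mrpt::obs::CObservation2DRangeScan::Create();
    // ... fill scan_ptr from msg, exactly as in your snippet ...
    mrpt::obs::CObservation::Ptr obs = scan_ptr;  // upcast to the generic observation pointer
    engine.execGraphSlamStep(obs, measurement_count);
    ++measurement_count;
}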
We're developing a high-frequency trading platform in C++. We tried gRPC with protobuf, but we saw that a single network call takes approximately 200-300 microseconds, which is too long for us. What we expect for serializing/deserializing data and sending it through a network socket is approximately 50-60 microseconds.
Then we tried protobuf over native C++ sockets (using non-blocking I/O) and saw that the time improved to approximately 150-200 microseconds, which was still not enough for us. Then we found FlatBuffers and implemented it as described below. However, during our tests we saw that serializing alone (same for deserializing) takes approximately 50 microseconds, and transferring the data takes another 30-40 microseconds, so in total it takes approximately 100-150 microseconds. So I wonder if we are doing something wrong in our implementation of FlatBuffers.
In the example below, the differences I've measured between the timestamp logs are:
Timestamp 1 -> Timestamp 2 = 16 microseconds
Timestamp 2 -> Timestamp 3 = 24 microseconds
Total serialization = 40 microseconds
Do you know any other way to increase the performance?
Example code for serializing data with flatbuffers in C++:
const char* MAHelper::getRequest(BaseRequest *request,int& size) {
const char *result;
flatbuffers::FlatBufferBuilder builder(10240);
if (request->orderType == OrderTypes::TYPE_LoginRequest){
std::cout<<"Timestamp 1: "<<getCurrentTimestamp()<<std::endl;
LoginRequest *loginRequest = (LoginRequest*) request;
std::cout<<"Converting Login Request 1: "<<getCurrentTimestamp()<<std::endl;
auto username = builder.CreateString(loginRequest->userName);
auto password = builder.CreateString(loginRequest->password);
auto application = getApplication(loginRequest->applicationType);
std::cout<<"Timestamp 2: "<<getCurrentTimestamp()<<std::endl;
auto loginReq = piramit::orders::fb::CreateLoginRequest(builder,username,password,application);
auto loginOrderBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_LoginRequest,loginReq.Union());
builder.Finish(loginOrderBase);
std::cout<<"Timestamp 3:"<<getCurrentTimestamp()<<std::endl;
} else if (request->orderType == OrderTypes::TYPE_EnterOrderRequest) {
EnterOrderRequest *enterOrderRequest = (EnterOrderRequest*) request;
auto strategyIdentifier = builder.CreateString(enterOrderRequest->strategyIdentifier);
auto passThrough = builder.CreateString(enterOrderRequest->passThrough);
auto account = builder.CreateString(enterOrderRequest->account);
auto authToken = builder.CreateString(enterOrderRequest->baseRequest.authToken);
auto enterOrderReq = piramit::orders::fb::CreateEnterOrder(builder,enterOrderRequest->orderbookId,enterOrderRequest->quantity,enterOrderRequest->price,account,
getStrategyType(enterOrderRequest->strategyType),strategyIdentifier,getSide(enterOrderRequest->side),getTimeInForce(enterOrderRequest->timeInForce),passThrough,getOrderType(enterOrderRequest->orderType));
auto enterOrderBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_EnterOrder,enterOrderReq.Union(),authToken);
builder.Finish(enterOrderBase);
} else if (request->orderType == OrderTypes::TYPE_ReplaceOrderRequest) {
ReplaceOrderRequest *replaceOrderRequest = (ReplaceOrderRequest*) request;
auto orderToken = builder.CreateString(replaceOrderRequest->orderToken);
auto authToken = builder.CreateString(replaceOrderRequest->baseRequest.authToken);
auto replaceOrderReq = piramit::orders::fb::CreateReplaceOrder(builder,orderToken,replaceOrderRequest->quantity,replaceOrderRequest->price);
auto replaceOrderBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_ReplaceOrder,replaceOrderReq.Union(),authToken);
builder.Finish(replaceOrderBase);
} else if (request->orderType == OrderTypes::TYPE_CancelOrderRequest) {
CancelOrderRequest *cancelOrderRequest = (CancelOrderRequest*) request;
auto orderToken = builder.CreateString(cancelOrderRequest->orderToken);
auto authToken = builder.CreateString(cancelOrderRequest->baseRequest.authToken);
auto cancelOrderReq = piramit::orders::fb::CreateCancelOrder(builder,orderToken);
auto cancelOrderBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_CancelOrder,cancelOrderReq.Union(),authToken);
builder.Finish(cancelOrderBase);
} else if (request->orderType == OrderTypes::TYPE_BasicOrderRequest) {
BasicOrderRequest *basicOrderRequest = (BasicOrderRequest*) request;
auto authToken = builder.CreateString(basicOrderRequest->baseRequest.authToken);
auto basicOrderReq = piramit::orders::fb::CreateOrderRequest(builder,getOperationType(basicOrderRequest->operation),basicOrderRequest->orderId,getOrderType(basicOrderRequest->orderTypes));
auto basicOrderBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_OrderRequest,basicOrderReq.Union(),authToken);
builder.Finish(basicOrderBase);
} else if (request->orderType == OrderTypes::TYPE_AccountStrategyRequest) {
AccountStrategyRequest *accountStrategyRequest = (AccountStrategyRequest*) request;
flatbuffers::Offset<flatbuffers::String> account = 0;
flatbuffers::Offset<flatbuffers::String> strategyIdentifier = 0;
auto authToken = builder.CreateString(accountStrategyRequest->baseRequest.authToken);
if (accountStrategyRequest->operation == OPERATION_SET) {
account = builder.CreateString(accountStrategyRequest->accountStrategy.account);
strategyIdentifier = builder.CreateString(accountStrategyRequest->accountStrategy.strategyIdentifier);
}
flatbuffers::Offset<piramit::orders::fb::AccountStrategy> accountStrategy = piramit::orders::fb::CreateAccountStrategy(builder,accountStrategyRequest->accountStrategy.orderBookId,account,getStrategyType(accountStrategyRequest->accountStrategy.strategyType),strategyIdentifier);
auto accountStrategyReq = piramit::orders::fb::CreateAccountStrategyRequest(builder,getOperationType(accountStrategyRequest->operation),accountStrategy);
auto accountStrategyBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_AccountStrategyRequest,accountStrategyReq.Union(),authToken);
builder.Finish(accountStrategyBase);
} else if (request->orderType == OrderTypes::TYPE_OrderBookStateRequest) {
OrderBookStateRequest *orderBookStateRequest = (OrderBookStateRequest*) request;
auto stateName = builder.CreateString(orderBookStateRequest->stateName);
auto orderBookStateReq = piramit::orders::fb::CreateOrderBookStateRequest(builder,stateName,orderBookStateRequest->orderBookId,orderBookStateRequest->timestamp);
auto orderBookStateBase = piramit::orders::fb::CreateRequestHolder(builder,piramit::orders::fb::BaseRequest_OrderBookStateRequest,orderBookStateReq.Union());
builder.Finish(orderBookStateBase);
}
uint8_t *requestBuffer = builder.GetBufferPointer();
result = (const char*) requestBuffer;
size = builder.GetSize();
return result;
}
And this is part of our FlatBuffers schema:
union BaseRequest { LoginRequest,EnterOrder,CancelOrder,ReplaceOrder,OrderRequest,AccountStrategyRequest,OrderBookStateRequest }
table RequestHolder {
request:BaseRequest;
authToken:string;
}
table LoginRequest {
username:string;
password:string;
application:Application = APP_UNKNOWN;
}
table EnterOrder{
order_book_id:uint;
quantity:ulong;
price:int;
account:string;
strategy:StrategyType;
strategy_identifier:string;
side:Side;
time_in_force:TimeInForce;
pass_through:string;
order_type:OrderType;
}
root_type RequestHolder;
For serializing:
You can save yourself some time by reusing the FlatBufferBuilder across calls; just call Reset() on it to clear it between messages.
You are doing HFT in C++, yet a lot of your data consists of strings? FlatBuffers has all sorts of really efficient ways of representing data, with scalars, structs and enums. Try to find better representations of your data if speed really matters.
For deserializing:
Deserializing in FlatBuffers costs nothing, since there is nothing to do: you can access the data in place. If what you're doing is copying all incoming FlatBuffers data into your own data structures, you are throwing away one of FlatBuffers' biggest advantages. Instead, make the code that acts on the incoming data work directly with the incoming FlatBuffer.
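For the first point, a rough sketch of reusing one builder (hedged: the piramit::orders::fb calls and accessors such as GetRequestHolder() / request_as_LoginRequest() are assumed to follow standard flatc output for your schema, and the header name is made up):

#include <string>
#include <flatbuffers/flatbuffers.h>
#include "orders_generated.h" // hypothetical name of the flatc-generated header

class RequestSerializer {
public:
    // Reuse one FlatBufferBuilder across calls instead of constructing a new one per request.
    const uint8_t* serializeLogin(const std::string& user, const std::string& pass, int& size)
    {
        builder_.Reset(); // clears offsets but keeps the internal buffer allocated
        auto u = builder_.CreateString(user);
        auto p = builder_.CreateString(pass);
        auto login  = piramit::orders::fb::CreateLoginRequest(builder_, u, p);
        auto holder = piramit::orders::fb::CreateRequestHolder(
            builder_, piramit::orders::fb::BaseRequest_LoginRequest, login.Union());
        builder_.Finish(holder);
        size = static_cast<int>(builder_.GetSize());
        return builder_.GetBufferPointer(); // only valid until the next Reset()/Finish()
    }
private:
    flatbuffers::FlatBufferBuilder builder_{10240};
};

// Receiving side, reading fields in place from the wire buffer (no unpack, no copy):
// auto holder = piramit::orders::fb::GetRequestHolder(recv_buf);
// if (holder->request_type() == piramit::orders::fb::BaseRequest_LoginRequest) {
//     const auto* login = holder->request_as_LoginRequest();
//     // use login->username()->c_str(), login->password()->c_str() directly
// }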
I am working with a 32-bit ARM Cortex-M4 (the processor in the Pixhawk) and writing two classes, each running as its own thread in the Pixhawk codebase.
The first one is LidarScanner, which deals with incoming serial data and generates the "obstacle situation". The second one is Algorithm, which handles the "obstacle situation" and applies some planning strategy. Here is my solution right now: I use the reference-parameter function LidarScanner::updateObstacle(uint8_t (&array)[181]) to copy out the "obstacle situation", which is a 181-element array.
LidarScanner.cpp:
class LidarScanner{
private:
    struct{
        bool available = false;
        int AngleArr[181];
        int RangeArr[181];
        bool isObstacle[181] = {}; //1: unsafe; 0: safe
    }scan;
    ......
public:
    LidarScanner();

    //main function
    void update()
    {
        while(hal.uartE->available()) //incoming serial data is available
        {
            decode_data(); //decode serial data into three kinds of data: Range, Angle and Period_flag
            if(complete_scan()) //determine if one full lidar scanner period is completed
            {
                scan.available = false;
                checkObstacle(); //check obstacle situation and store safety in isObstacle[181]
                scan.available = true;
            }
        }
    }

    //for another API to call
    void updateObstacle(uint8_t (&array)[181])
    {
        for(int i=0; i<181; i++) // note: i<=181 would read/write one element past the end
        {
            array[i]=scan.isObstacle[i];
        }
    }

    //for another API to call
    bool ScanAvailable() const { return scan.available; }
    ......
};
Algorithm.cpp:
class Algorithm{
private:
    uint8_t Obatcle_Value[181] = {};
    class LidarScanner& _lidarscanner;
    ......
public:
    Algorithm(class LidarScanner& _lidarscanner);

    //main function
    void update()
    {
        if (hal.uartE->available() && _lidarscanner.ScanAvailable())
        {
            //Update obstacle situation into the Algorithm phase and do more planning strategy
            _lidarscanner.updateObstacle(Obatcle_Value);
        }
    }
    ......
};
Usually it works fine, but I want to improve the performance, so I would like to know the most effective way to do that. Thanks!
The most efficient way to copy data is to use the DMA.
DMAx_Channelx->CNDTR = size;                   // number of data items to copy
DMAx_Channelx->CPAR  = (uint32_t)&source;      // "peripheral" address = source in MEM2MEM mode
DMAx_Channelx->CMAR  = (uint32_t)&destination; // memory address = destination
DMAx_Channelx->CCR   = (0<<DMA_CCR_MSIZE_Pos) | (0<<DMA_CCR_PSIZE_Pos) // 8-bit source and destination
                     | DMA_CCR_MINC | DMA_CCR_PINC | DMA_CCR_MEM2MEM;  // increment both, memory-to-memory
DMAx_Channelx->CCR  |= DMA_CCR_EN;             // enable the channel: the copy starts now
while(!(DMAx->ISR & DMA_ISR_TCIFx ));          // busy-wait until the transfer-complete flag is set
See ST application note AN4031, "Using the DMA controller".
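For the 181-byte obstacle array, a small wrapper could look like this (a sketch only: DMA1_Channel1 / DMA_ISR_TCIF1 are placeholders for whichever channel is free on your part, the DMA clock is assumed to be already enabled in RCC, and this assumes an STM32 whose DMA exposes channel registers as in the snippet above; stream-based parts such as the STM32F4 use different register names):

// Hedged sketch: memory-to-memory copy of the obstacle array via DMA.
static void dma_copy_obstacles(const uint8_t (&src)[181], uint8_t (&dst)[181])
{
    DMA1_Channel1->CNDTR = 181;                  // 181 byte-sized transfers
    DMA1_Channel1->CPAR  = (uint32_t)src;        // source ("peripheral" side in MEM2MEM)
    DMA1_Channel1->CMAR  = (uint32_t)dst;        // destination (memory side)
    DMA1_Channel1->CCR   = DMA_CCR_MINC | DMA_CCR_PINC | DMA_CCR_MEM2MEM; // 8-bit, both incrementing
    DMA1_Channel1->CCR  |= DMA_CCR_EN;           // start the transfer
    while (!(DMA1->ISR & DMA_ISR_TCIF1)) { }     // wait for transfer complete
    DMA1->IFCR = DMA_IFCR_CTCIF1;                // clear the flag for next time
    DMA1_Channel1->CCR &= ~DMA_CCR_EN;           // disable the channel again
}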
I am learning how to simulate digital logic circuits. I am presenting the source code of my first attempt here. It is a small program for simulating circuits consisting of AND, OR and NOT gates.
This code works well for circuits without loops. When circuit loops are introduced, it causes a stack overflow because of endless recursion. Please help me remove this bug.
Please note that this is a hobby project and any help will be appreciated.
Source code:
#include <cstdlib>
#include <iostream>
using namespace std;
class LogicGate
{
int type;//gate type: 0 for NOT, 1 for OR, 2 for AND
//pins
bool ina;//input a
bool inb;//input b::this pin is not used for NOT gate
bool outc;//output
//fan-in
LogicGate* ga;//gate connected to input a
LogicGate* gb;//gate connected to input b
//fan-out
LogicGate* gc;//gate connected to output
int gctarget;//target input to which the gate "gc" is connected: 0 for input a, 1 for input b
public:
const char* name; // points at string literals below, so it must be const
LogicGate()
{
ina = inb = outc = false;
ga = gb = gc = (LogicGate*)0;
type = 0;
}
LogicGate(bool a, bool b)
{
ina = a; inb = b; outc = false;
ga = gb = gc = (LogicGate*)0;
type = 0;
}
//set logic
void settype(int t){if(t>=0&&t<3)type=t;else type=0;}
//set input
void seta(bool a){ ina = a; }
void setb(bool b){ inb = b; }
void setab(bool a, bool b){ina = a; inb = b; }
//connect gate
void cona(LogicGate* cga){ ga = cga; }
void conb(LogicGate* cgb){ gb = cgb; }
void conab(LogicGate* cga, LogicGate* cgb){ ga = cga; gb = cgb; }
//connect the output of this gate to another gate's input
void chainc(LogicGate* cgc, int target)
{
gc = cgc;
gctarget = target;
if(target==0) cgc->cona(this); else cgc->conb(this);
}
//calculate output
bool calcout()
{
//if the input is not available make it available by forcing the connected gates to calculate
if(ga){ ina = ga->calcout(); } //BUG:this may cause Stack overflow for circuits with loops
if(gb){ inb = gb->calcout(); }//BUG:this may cause Stack overflow for circuits with loops
//do the logic when inputs are available
switch(type)
{
case 0:
outc = !ina; break;
case 1:
outc = ina || inb; break;
case 2:
outc = ina && inb; break;
}
//if a gate is connected to output pin transfer the output value to the target input pin of the gate
if(gc){
if(gctarget==0){
gc->seta(outc);
}else{
gc->setb(outc);
}
}
//for debugging only
cout<<name<<" outputs "<<outc<<endl;
return outc;
}
};
int main(int argc, char *argv[])
{
LogicGate x,z;
//AND gate
z.settype(2);
z.seta(false);
z.setb(true);
z.name = "ZZZ";
//OR gate
x.settype(1);
x.cona(&z); // take one input from AND gate's output
x.setb(true);
x.name = "XXX";
//z.cona(&x);// take one input from OR gate's output to make a loop:: results in stack overflow
x.chainc(&z,0);//connect the output to AND gate's input "a" to form loop:: results in stack overflow
cout<<"final output"<<x.calcout()<<endl;
return 0;
}
The problem here is that you are looping infinitely. A program behaves somewhat differently from real logic gates. I see two possibilities here:
1) Implement cycles
You can implement it the way a CPU works: a call to calcout only calculates the output of one gate, and then you iterate to the next one. You could create a container class for your gates:
#include <vector>
#include <queue>

class GateContainer
{
    //Contains all gates of your "circuit"
    std::vector<LogicGate> allGates;
    //Contains all gates that still have to be processed
    std::queue<LogicGate*> currentGates;

    void nextStep();
};
The nextStep function could look like this:
void GateContainer::nextStep()
{
    //Get the first gate from the queue
    LogicGate *current = currentGates.front();
    currentGates.pop();

    //Do all the calculations with the current gate here

    //Push the gate connected to the output onto the queue
    if (current->gc)
        currentGates.push(current->gc);
}
Please note that this code is untested and may also need some error checks (gc is private in your LogicGate, so you would need an accessor or friendship).
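To actually drive it, you could then step the circuit a bounded number of times instead of letting calcout() recurse through the loop. A sketch (run() would be another member you add to GateContainer; the seed gate and the number of ticks are up to you):

// Run the circuit for a bounded number of ticks; each tick processes exactly one gate.
void GateContainer::run(LogicGate* seedGate, int ticks)
{
    currentGates.push(seedGate); // start from some gate, e.g. one driven by an input
    for (int t = 0; t < ticks && !currentGates.empty(); ++t)
        nextStep();
}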
2) Try to catch loops
You can also try to catch loops in calcout. You could achieve this by adding a flag to LogicGate and resetting it every time before calling calcout:
class LogicGate
{
    ...
    bool calculated;
    ...
};
Now, before calling calcout(), you need to set calculated to false for every gate. Then calcout could look something like this:
bool calcout()
{
    calculated = true;
    if(ga && !ga->calculated){ ina = ga->calcout(); }
    if(gb && !gb->calculated){ inb = gb->calcout(); }
    ...
}
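The driving code would then look roughly like this (a sketch; allGates stands for whatever collection you keep your gates in, and outputGate for the gate whose value you want):

// Reset the visited flags, then evaluate the output gate once; any gate already
// visited in this pass keeps its previous value instead of recursing forever.
for (LogicGate* g : allGates)
    g->calculated = false;
bool result = outputGate->calcout();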
I want to ask you guys to help me with creating keyframes in the 3ds Max SDK (C++).
What I've done:
Created a Controller Plugin
Inside the GetValue function I've done my translations in code.
I also wrote the SetValue function, which I think manages keyframes and stores the controller's position at a given time in a given keyframe. This way I managed to set keys manually, but I would really like it to work with Auto Key turned on in Max.
On the other hand, I can't see the values of the freshly added keys. So please help me: how can I add keyframes?
Many Thanks:
Banderas
void maxProject3::GetValue(TimeValue t, void *ptr, Interval &valid, GetSetMethod method)
{
Point3 p3OurAbsValue(0, 0, 0);
tomb[0]=0;
//These positions stores my data they are globals
XPosition += (accX);
YPosition += (accY);
ZPosition += (accZ);
p3OurAbsValue.x = XPosition;
p3OurAbsValue.y = YPosition;
p3OurAbsValue.z = ZPosition;
valid.Set(t,t+1); //This answer is only valid at the calling time.
MatrixCtrl->GetValue(t, &p3OurAbsValue.y, valid, CTRL_RELATIVE);
if (method == CTRL_ABSOLUTE)
{
Point3* p3InVal = (Point3*)ptr;
*p3InVal = p3OurAbsValue;
}
else // CTRL_RELATIVE
{
//We do our translations on a Matrix
Matrix3* m3InVal = (Matrix3*)ptr;
//m3InVal->PreTranslate(p3OurAbsValue);
m3InVal->PreRotateX(rotX);
m3InVal->PreRotateY(rotY);
m3InVal->PreRotateZ(rotZ);
}
}
int maxProject3::NumSubs() {
return 1;
}
Animatable* maxProject3::SubAnim(int n) {
return MatrixCtrl;
}
void maxProject3::SetValue(TimeValue t, void *ptr, int commit, GetSetMethod method)
{
Matrix3* m3InVal = (Matrix3*)ptr;
MatrixCtrl->AddNewKey(t, ADDKEY_SELECT);
MatrixCtrl->SetValue(t, &m3InVal, commit, CTRL_RELATIVE);
}
To turn on Auto Key mode, try using AnimateOn() before your transformation, and add AnimateOff() at the end to turn Auto Key mode back off.
I did this in one of my projects to create a material-ID animation using Auto Key mode.
/** Auto key on*/
AnimateOn();
/** Creating material id animation */
for(int mtl_id = 1; mtl_id <= num_sub_mtl; ++mtl_id, time += time_step)
{
mtl_modifier->GetParamBlock()->SetValue(MATMOD_MATID,time,mtl_id);
}
/** Auto key off*/
AnimateOff();
Also, as a suggestion, use the MAXScript listener to see what's happening when the animation is created through the 3ds Max GUI. This will help you recreate the animation using the Max SDK.
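Applied to your plugin, that could look roughly like this (a sketch only: it assumes MatrixCtrl is the sub-controller from your SetValue above, and it passes the Matrix3 pointer itself rather than the address of the local pointer):

void maxProject3::SetValue(TimeValue t, void *ptr, int commit, GetSetMethod method)
{
    Matrix3* m3InVal = (Matrix3*)ptr;
    AnimateOn();                                             // act as if Auto Key were enabled
    MatrixCtrl->SetValue(t, m3InVal, commit, CTRL_RELATIVE); // should create/update a key at time t
    AnimateOff();
}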
I am using libsvm version 3.16. I have done some training in Matlab, and created a model. Now I would like to save this model to disk and load this model in my C++ program. So far I have found the following alternatives:
This answer explains how to save a model from C++, which is based on this website. Not exactly what I need, but could be adapted. (This requires development time).
I could find the best training parameters (kernel,C) in Matlab and re-train everything in C++. (Will require doing the training in C++ each time I change a parameter. It's not scalable).
Thus, neither of these options is satisfactory.
Does anyone have an idea?
My solution was to retrain in C++ because I couldn't find a nice way to directly save the model. Here's my code. You'll need to adapt it and clean it up a bit. The biggest change you'll have to make is not hard-coding the svm_parameter values like I did. You'll also have to replace FilePath with std::string. I'm copying, pasting and making small edits here on SO, so the formatting won't be perfect:
Used like this:
auto targetsPath = FilePath("targets.txt");
auto observationsPath = FilePath("observations.txt");
auto targetsMat = MatlabMatrixFileReader::Read(targetsPath, ',');
auto observationsMat = MatlabMatrixFileReader::Read(observationsPath, ',');
auto v = MiscVector::ConvertVecOfVecToVec(targetsMat);
auto model = SupportVectorRegressionModel{ observationsMat, v };
std::vector<double> observation{ { // 32 feature observation
0.883575729725847,0.919446119013878,0.95359403450317,
0.968233630936732,0.91891307107125,0.887897763183844,
0.937588566544751,0.920582702918882,0.888864454119387,
0.890066735260163,0.87911085669864,0.903745573664995,
0.861069296586979,0.838606194934074,0.856376230548304,
0.863011311537075,0.807688936997926,0.740434984165146,
0.738498042748759,0.736410940165691,0.697228384912424,
0.608527698289016,0.632994967880269,0.66935784966765,
0.647761430696238,0.745961037635717,0.560761134660957,
0.545498063585615,0.590854855113663,0.486827902942118,
0.187128866890822,- 0.0746523069562551
} };
double prediction = model.Predict(observation);
miscvector.h
#pragma once
#include <vector>
using std::vector;

static vector<double> ConvertVecOfVecToVec(const vector<vector<double>> &mat)
{
vector<double> targetsVec;
targetsVec.reserve(mat.size());
for (size_t i = 0; i < mat.size(); i++)
{
targetsVec.push_back(mat[i][0]);
}
return targetsVec;
}
libsvmtargetobjectconvertor.h
#pragma once
#include "machinelearning.h"
#include "svm.h" // libsvm: the full definition of svm_node is needed below
class LibSvmTargetObservationConvertor
{
public:
svm_node ** ConvertObservations(const vector<MlObservation> &observations, size_t numFeatures) const
{
svm_node **svmObservations = (svm_node **)malloc(sizeof(svm_node *) * observations.size());
for (size_t rowI = 0; rowI < observations.size(); rowI++)
{
svm_node *row = (svm_node *)malloc(sizeof(svm_node) * (numFeatures + 1)); // +1 for the terminator node
for (size_t colI = 0; colI < numFeatures; colI++)
{
row[colI].index = colI;
row[colI].value = observations[rowI][colI];
}
row[numFeatures].index = -1; // libsvm vectors are terminated by a node with index = -1
svmObservations[rowI] = row;
}
return svmObservations;
}
svm_node* ConvertMatToSvmNode(const MlObservation &observation) const
{
size_t numFeatures = observation.size();
svm_node *obsNode = (svm_node *)malloc(sizeof(svm_node) * (numFeatures + 1)); // +1 for the terminator node
for (size_t rowI = 0; rowI < numFeatures; rowI++)
{
obsNode[rowI].index = rowI;
obsNode[rowI].value = observation[rowI];
}
obsNode[numFeatures].index = -1; // libsvm vectors are terminated by a node with index = -1
return obsNode;
}
};
machinelearning.h
#pragma once
#include <vector>
using std::vector;
using MlObservation = vector<double>;
using MlTarget = double;
//machinelearningmodel.h
#pragma once
#include <vector>
#include "machinelearning.h"
class MachineLearningModel
{
public:
virtual ~MachineLearningModel() {}
virtual double Predict(const MlObservation &observation) const = 0;
};
matlabmatrixfilereader.h
#pragma once
#include <cstdlib> // atof
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
using std::string;
using std::vector;
class FilePath;
// Matrix created with command:
// dlmwrite('my_matrix.txt', somematrix, 'delimiter', ',', 'precision', 15);
// In these files, each row is a matrix row. Commas separate elements on a row.
// There is no space at the end of a row. There is a blank line at the bottom of the file.
// File format:
// 0.4,0.7,0.8
// 0.9,0.3,0.5
// etc.
class MatlabMatrixFileReader
{
public:
static vector<vector<double>> Read(const FilePath &asciiFilePath, char delimiter)
{
vector<vector<double>> values;
vector<double> valueline;
std::ifstream fin(asciiFilePath.Path());
string item, line;
while (getline(fin, line))
{
std::istringstream in(line);
while (getline(in, item, delimiter))
{
valueline.push_back(atof(item.c_str()));
}
values.push_back(valueline);
valueline.clear();
}
fin.close();
return values;
}
};
supportvectorregressionmodel.h
#pragma once
#include <vector>
using std::vector;
#include "machinelearningmodel.h"
#include "svm.h" // libsvm
class FilePath;
class SupportVectorRegressionModel : public MachineLearningModel
{
public:
~SupportVectorRegressionModel()
{
svm_free_model_content(model_);
svm_destroy_param(&param_);
svm_free_and_destroy_model(&model_);
}
SupportVectorRegressionModel(const vector<MlObservation>& observations, const vector<MlTarget>& targets)
{
// assumes all observations have same number of features
size_t numFeatures = observations[0].size();
//setup targets
//auto v = ConvertVecOfVecToVec(targetsMat);
double *targetsPtr = const_cast<double *>(&targets[0]); // why aren't the targets const?
LibSvmTargetObservationConvertor conv;
svm_node **observationsPtr = conv.ConvertObservations(observations, numFeatures);
// setup observations
//svm_node **observations = BuildObservations(observationsMat, numFeatures);
// setup problem
svm_problem problem;
problem.l = targets.size();
problem.y = targetsPtr;
problem.x = observationsPtr;
// specific to out training sets
// TODO: This is hard coded.
// Bust out these values for use in constructor
param_.C = 0.4; // cost
param_.svm_type = 4; // SVR
param_.kernel_type = 2; // radial
param_.nu = 0.6; // SVR nu
// These values are the defaults used in the Matlab version
// as found in svm_model_matlab.c
param_.gamma = 1.0 / (double)numFeatures;
param_.coef0 = 0;
param_.cache_size = 100; // in MB
param_.shrinking = 1;
param_.probability = 0;
param_.degree = 3;
param_.eps = 1e-3;
param_.p = 0.1;
param_.shrinking = 1;
param_.probability = 0;
param_.nr_weight = 0;
param_.weight_label = NULL;
param_.weight = NULL;
// suppress command line output
svm_set_print_string_function([](auto c) {});
model_ = svm_train(&problem, &param_);
}
double Predict(const vector<double>& observation) const override
{
LibSvmTargetObservationConvertor conv;
svm_node *obsNode = conv.ConvertMatToSvmNode(observation);
double prediction = svm_predict(model_, obsNode);
return prediction;
}
SupportVectorRegressionModel(const FilePath & modelFile)
{
model_ = svm_load_model(modelFile.Path().c_str());
}
private:
svm_model *model_;
svm_parameter param_;
};
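If you later want to skip the retraining step, a small addition along these lines (not part of the class above, just a sketch) would persist the trained model with libsvm's svm_save_model(), so it can be reloaded through the FilePath constructor:

// Hypothetical extra member of SupportVectorRegressionModel: write the trained
// model to disk in libsvm's standard text model format.
void Save(const std::string& path) const
{
    svm_save_model(path.c_str(), model_);
}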
Option 1 is actually pretty reasonable. If you save the model in libsvm's C format through matlab, then it is straightforward to work with the model in C/C++ using functions provided by libsvm. Trying to work with matlab-formatted data in C++ will probably be much more difficult.
The main function in "svm-predict.c" (located in the root directory of the libsvm package) probably has most of what you need:
if((model=svm_load_model(argv[i+1]))==0)
{
fprintf(stderr,"can't open model file %s\n",argv[i+1]);
exit(1);
}
To predict a label for example x using the model, you can run
int predict_label = svm_predict(model,x);
The trickiest part of this will be to transfer your data into the libsvm format (unless your data is in the libsvm text file format, in which case you can just use the predict function in "svm-predict.c").
A libsvm vector, x, is an array of struct svm_node that represents a sparse array of data. Each svm_node has an index and a value, and the vector must be terminated by an index that is set to -1. For instance, to encode the vector [0,1,0,5], you could do the following:
struct svm_node *x = (struct svm_node *) malloc(3*sizeof(struct svm_node));
x[0].index=2; //NOTE: libsvm indices start at 1
x[0].value=1.0;
x[1].index=4;
x[1].value=5.0;
x[2].index=-1;
For SVM types other than the classifier (C_SVC), look at the predict function in "svm-predict.c".
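If your data lives in dense std::vector<double>s, a small helper along these lines (just a sketch) builds the sparse, -1-terminated array described above:

#include <vector>
#include "svm.h" // libsvm

// Convert a dense vector into libsvm's sparse representation:
// 1-based indices, zeros skipped, terminated by a node with index = -1.
std::vector<svm_node> to_svm_nodes(const std::vector<double>& dense)
{
    std::vector<svm_node> nodes;
    for (int i = 0; i < (int)dense.size(); ++i)
        if (dense[i] != 0.0)
            nodes.push_back({ i + 1, dense[i] });
    nodes.push_back({ -1, 0.0 });
    return nodes;
}

// usage: double predicted = svm_predict(model, to_svm_nodes(x).data());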