Locating a bug triggering a runtime error in Rcpp (C++)

I am using Rcpp to translate a model written in C++ to R, to take advantage of R's visualization and fitting capabilities. I know the C++ code compiles and runs well. However, in Rcpp it triggers a runtime error that manifests as a fatal error and aborts the R session.
To locate where the error is triggered, I wrote a wait_for_returnRcpp() function and placed calls to it throughout the code.
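For reference, here is a minimal sketch of what such a pause helper can look like (the original wait_for_returnRcpp() is not shown in the question; this version assumes the R session is attached to an interactive console, so that std::cin can block):

#include <Rcpp.h>
#include <iostream>

// hypothetical pause helper: execution stops here until the user presses return,
// so the last prompt printed before a crash brackets the faulty statement
void wait_for_returnRcpp() {
    Rcpp::Rcout << "Press <return> to continue..." << std::endl;
    std::cin.get();   // blocks until a newline arrives on stdin
}

This way I found the function that is triggering the error: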
init = 0;
gammas[group[id_data_point]] *   // note: the result of this expression is discarded
    (1 - pow(1 - rel_abund_resid[id_data_point] -
             rel_abund_visitors[id_data_point], 2)) /
    (1 - gammas[group[id_data_point]]);
draw(clientSet, int(sim_param["totRounds"]),
     rel_abund_resid[id_data_point],
     rel_abund_visitors[id_data_point]);
cout << id_data_point << '\t' << group[id_data_point] << '\n';
wait_for_returnRcpp();
cleaners[agent[id_data_point]][group[id_data_point]]->rebirth(init);
wait_for_returnRcpp(); // This function is not reached
idClientSet = 0;
VisPref = 0, countRVopt = 0;
cleaners[agent[id_data_point]][group[id_data_point]]->
    act(clientSet, idClientSet, prob_Vis_Leav[id_data_point],
        double(sim_param["ResProbLeav"]), double(sim_param["VisReward"]),
        sim_param["ResReward"], sim_param["inbr"], sim_param["outbr"],
        learnScenario(int(sim_param["scenario"])));
wait_for_returnRcpp();
cleaners[agent[id_data_point]][group[id_data_point]]->update();
return 0;
While trying to understand how the function triggers the error, I commented out some parts of the code. Surprisingly, when I commented out a function that is executed after the error-triggering function, the error was no longer triggered:
init = 0;
gammas[group[id_data_point]] *
    (1 - pow(1 - rel_abund_resid[id_data_point] -
             rel_abund_visitors[id_data_point], 2)) /
    (1 - gammas[group[id_data_point]]);
draw(clientSet, int(sim_param["totRounds"]),
     rel_abund_resid[id_data_point],
     rel_abund_visitors[id_data_point]);
cout << id_data_point << '\t' << group[id_data_point] << '\n';
wait_for_returnRcpp();
cleaners[agent[id_data_point]][group[id_data_point]]->rebirth(init);
wait_for_returnRcpp();
idClientSet = 0;
VisPref = 0, countRVopt = 0;
// cleaners[agent[id_data_point]][group[id_data_point]]->
//     act(clientSet, idClientSet, prob_Vis_Leav[id_data_point],
//         double(sim_param["ResProbLeav"]), double(sim_param["VisReward"]),
//         sim_param["ResReward"], sim_param["inbr"], sim_param["outbr"],
//         learnScenario(int(sim_param["scenario"])));
// wait_for_returnRcpp();
// cleaners[agent[id_data_point]][group[id_data_point]]->update();
return 0;
// Code runs until the end
I have a hard time understanding how this is possible. The code is lengthy, so I was not able to provide a reproducible example. Any ideas about what I am missing are very welcome.


warning: Failed to call `main()` to execute the macro

I am trying to learn ROOT and I have a few example macros to work with. Sometimes they work and sometimes they don't.
{
    c1 = new TCanvas("c1", "My Root Plots", 600, 400);
    c1->Divide(2, 2);
    c1->cd(1);
    f = new TF1("f", "[0]*exp(-0.5*((x-[1])/[2])**2)/(sqrt(2.0*TMath::Pi())*[2])", -100, 100);
    f->SetTitle("Gaus;X axis ;Y axis");
    f->SetParameter(0, 0.5*sqrt(2*TMath::Pi()));
    f->SetParameter(1, 8);
    f->SetParameter(2, 5);
    f->SetLineColor(3);
    f->SetMarkerColor(1);
    f->SetMarkerStyle(kOpenStar);
    f->SetMarkerSize(5);
    f->Draw();
    c1->cd(2);
    f1 = new TF1("f1", "[0]*x+[1]", 0, 50);
    f1->SetParameters(10, 4);
    f1->SetLineColor(5);
    f1->SetTitle("ax+b;x;y");
    f1->Draw();
}
This is the code I am trying to run. The code is kind of working ("what do you mean, kind of working?"): it gives me a graph, but even though I set f->SetMarkerColor(1); and f->SetMarkerStyle(kOpenStar);, the markers don't appear on the graph. The terminal doesn't give me any errors. Is something missing from my ROOT library? I cannot upload images because I am new here.
I have another problem. I want to share it because maybe it will help in solving the first one.
void testRandom(Int_t nrEvents=500000000)
{
    TRandom  *r1 = new TRandom();
    TRandom2 *r2 = new TRandom2();
    TRandom3 *r3 = new TRandom3();
    TCanvas *c1 = new TCanvas("c1", "TRandom Number Generators", 800, 600);
    c1->Divide(3, 1);
    TH1D *h1 = new TH1D("h1", "TRandom", 500, 0, 1);
    TH1D *h2 = new TH1D("h2", "TRandom2", 500, 0, 1);
    TH1D *h3 = new TH1D("h3", "TRandom3", 500, 0, 1);
    TStopwatch *st = new TStopwatch();
    st->Start();
    for (Int_t i = 0; i < nrEvents; i++) { h1->Fill(r1->Uniform(0, 1)); }
    st->Stop(); cout << "Random: " << st->CpuTime() << endl;
    st->Start();
    c1->cd(1); h1->SetFillColor(kRed + 1); h1->SetMinimum(0); h1->Draw();
    for (Int_t i = 0; i < nrEvents; i++) { h2->Fill(r2->Uniform(0, 1)); }
    st->Stop(); cout << "Random2: " << st->CpuTime() << endl;
    st->Start();
    c1->cd(2); h2->SetFillColor(kGreen + 1); h2->SetMinimum(0); h2->Draw();
    for (Int_t i = 0; i < nrEvents; i++) { h3->Fill(r3->Uniform(0, 1)); }
    st->Stop(); cout << "Random3: " << st->CpuTime() << endl;
    c1->cd(3);
    h3->Draw(); h3->SetFillColor(kBlue + 1); h3->SetMinimum(0);
}
This is another code I am trying to run, but it doesn't work and it gives me this error:
warning: Failed to call main() to execute the macro.
Add this function or rename the macro. Falling back to .L.
I tried different things:
root [1] .x main.cpp
root [1] .L main.cpp
but it still gives me the same error.
Regarding "f->SetMarkerColor(1); f->SetMarkerStyle(kOpenStar); ... but markers didn't appear on the graph":
Try f->Draw("PL") instead of f->Draw() to make the markers visible.
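Applied to the first macro, only the draw call changes:

f->SetMarkerColor(1);
f->SetMarkerStyle(kOpenStar);
f->SetMarkerSize(5);
f->Draw("PL");   // "P" paints the markers, "L" the connecting line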
Regarding the warning "Failed to call main() to execute the macro":
Rename your file: it should be called testRandom.cpp instead of main.cpp, because a named ROOT macro must live in a file whose name matches the function it defines (that is what the "Add this function or rename the macro" part of the warning is pointing at).
Then you can execute it with .x testRandom.cpp.
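For example:

root [0] .x testRandom.cpp            // runs testRandom() with its default argument
root [1] .x testRandom.cpp(1000000)   // or pass nrEvents explicitly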

Building an OSC message with a for loop in C++/oscpack for lidar

OK, so I'm decent with many programming languages, but C++ isn't one of them. This task would take me 5 minutes in pretty much anything else, but after many tries, I'm hoping for some guidance.
My application reads data from an RPLidar A3 and packages the data into OSC, to be forwarded to other applications that convert it into trackable blobs, as a generic tracking system for people in a space.
I'm doing this via a TINY modification to their example viewer app, using the oscpack library.
The sensor returns two values per reading: an angle (float) and a distance in mm (integer).
The SDK gets all the values from a spin, and then you iterate through them with a loop. I have it working fine where it sends the value pairs out over a UDP socket as it gets them (i.e. "lidar1/ angle distance"). It looks like this:
UdpTransmitSocket transmitSocket(IpEndpointName(ADDRESS, PORT));
char lidbuffer[OUTPUT_BUFFER_SIZE];
for (int pos = 0; pos < (int)count; ++pos) {
    scanDot dot;
    if (!buffer[pos].dist_mm_q2) continue;
    dot.quality = buffer[pos].quality;
    dot.angle = buffer[pos].angle_z_q14 * 90.f / 16384.f;
    dot.dist = buffer[pos].dist_mm_q2 / 4.0f;
    _scan_data.push_back(dot);
    osc::OutboundPacketStream p(lidbuffer, OUTPUT_BUFFER_SIZE);
    p << osc::BeginMessage(SENDPREFIX)
      << dot.angle << (int)dot.dist << osc::EndMessage;
    transmitSocket.Send(p.Data(), p.Size());
}
It works, but as I've moved to a higher-resolution sensor, I'm having issues with dropped data, so I would like to build the data into a single message or a bundle. For reference, each revolution of the sensor is about 800 angle/distance pairs.
This is what I thought would work, based on the oscpack example. It compiles, but crashes on opening:
As a big message:
UdpTransmitSocket transmitSocket(IpEndpointName(ADDRESS, PORT));
char lidbuffer[OUTPUT_BUFFER_SIZE];
//make a new message
osc::OutboundPacketStream p(lidbuffer, OUTPUT_BUFFER_SIZE);
p << osc::BeginMessage(SENDPREFIX);
for (int pos = 0; pos < (int)count; ++pos) {
    scanDot dot;
    if (!buffer[pos].dist_mm_q2) continue;
    dot.quality = buffer[pos].quality;
    dot.angle = buffer[pos].angle_z_q14 * 90.f / 16384.f;
    dot.dist = buffer[pos].dist_mm_q2 / 4.0f;
    _scan_data.push_back(dot);
    // add more data into the message
    p << dot.angle << (int)dot.dist;
}
//send the message
p << osc::EndMessage;
transmitSocket.Send(p.Data(), p.Size());
As a bundle
UdpTransmitSocket transmitSocket(IpEndpointName(ADDRESS, PORT));
char lidbuffer[OUTPUT_BUFFER_SIZE];
osc::OutboundPacketStream p(lidbuffer, OUTPUT_BUFFER_SIZE);
//make a bundle
p << osc::BeginBundleImmediate;
for (int pos = 0; pos < (int)count; ++pos) {
    scanDot dot;
    if (!buffer[pos].dist_mm_q2) continue;
    dot.quality = buffer[pos].quality;
    dot.angle = buffer[pos].angle_z_q14 * 90.f / 16384.f;
    dot.dist = buffer[pos].dist_mm_q2 / 4.0f;
    _scan_data.push_back(dot);
    // add a message to the bundle
    p << osc::BeginMessage(SENDPREFIX) << dot.angle << (int)dot.dist << osc::EndMessage;
}
//send the bundle
p << osc::EndBundle;
transmitSocket.Send(p.Data(), p.Size());
Both compile but crash.
I've read several things, but the << streaming examples I've found aren't getting me there, and I'm guessing it is something really simple.
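One thing worth checking, although OUTPUT_BUFFER_SIZE is not shown in the question: a single message holding ~800 float/int32 pairs needs roughly 10 KB (8 bytes of payload per pair, plus type tags, padding, and the address header), and oscpack's OutboundPacketStream throws osc::OutOfBufferMemoryException when it runs past the end of its buffer, which terminates the program if nothing catches it. Here is a sketch of the single-message variant with a larger buffer and the exception surfaced (the buffer size is an assumption; SENDPREFIX, buffer, and count are from the snippets above):

#include <iostream>
static const int BIG_OUTPUT_BUFFER_SIZE = 16384;   // assumption: large enough for one full revolution

char lidbuffer[BIG_OUTPUT_BUFFER_SIZE];
try {
    osc::OutboundPacketStream p(lidbuffer, BIG_OUTPUT_BUFFER_SIZE);
    p << osc::BeginMessage(SENDPREFIX);
    for (int pos = 0; pos < (int)count; ++pos) {
        if (!buffer[pos].dist_mm_q2) continue;
        p << (buffer[pos].angle_z_q14 * 90.f / 16384.f)      // angle
          << (int)(buffer[pos].dist_mm_q2 / 4.0f);           // distance in mm
    }
    p << osc::EndMessage;
    transmitSocket.Send(p.Data(), p.Size());
} catch (osc::Exception& e) {
    std::cerr << "OSC packing failed: " << e.what() << std::endl;   // e.g. out of buffer memory
}

Note that the whole packet still has to fit in one UDP datagram, so splitting a revolution across a few messages is another option if the receiver drops large packets.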
Caveats:
I know people have written wrappers for this driver code in many other languages, but I would like to stick with this one if possible.
I would like to keep it as OSC/UDP.
Any guidance is appreciated.

Keep Lua state in a C++ environment to limit context switches

I'm having fun coding simple OpenGL demos, and I recently decided to use Lua with my C++ engine in order to change the rendering dynamically without having to recompile my project over and over. That way I can tweak the rendering algorithm more easily. But I know that my current rendering update functions are probably far from efficient.
For the moment, I'm transferring a matrix from C++ to Lua, modifying it in a Lua script, and sending it back to my C++ rendering engine. But I'm reloading the Lua script on every update call from the C++ engine, so I lose all of the variable context. That means I'm always starting from scratch, and my rendering is far from smooth. I include some code samples below to explain what I'm doing. I am currently learning Lua with C++ embedding, so I know I still don't have the best practices.
update.lua
function transform(m)
    amplitude = 1.5
    frequency = 500
    phase = 0.0
    r = {}
    for i = 1, #m do
        r[i] = {}
        for j = 1, #m[i] do
            if i % 2 == 1 then  -- note: in Lua even 0 is truthy, so a bare (i % 2) would always be true
                r[i][j] = amplitude * math.sin(m[i][j] + phase)
            else
                r[i][j] = -amplitude * math.sin(m[i][j] + phase)
            end
            phase = phase + 0.001
        end
    end
    return r
end

-- called by c++
function update()
    m = pull()
    r = transform(m)
    push(r)
end
matrix.cpp
// pull the matrix, from Lua's point of view
static int pull(lua_State * _L)
{
    _push(_L, &_m);
    return 1;
}

// push the matrix, from Lua's point of view
static int push(lua_State * _L)
{
    // get the number of arguments
    int n = lua_gettop(_L);
    if (1 == n) {
        _pull(_L, 1, &_m);
    }
    return 1;
}

void matrix::load_file(char * file, char * function)
{
    int status;
    // load the file containing the script we are going to run
    status = luaL_loadfile(_L, file);
    switch (status) {
    case LUA_OK:
        break;
    case LUA_ERRFILE:
        // note: lua_error() must not be used here (it longjmps); the message sits on top of the stack
        std::cout << "LUA_ERRFILE: " << lua_tostring(_L, -1) << std::endl;
        break;
    case LUA_ERRSYNTAX:
        std::cout << "LUA_ERRSYNTAX: " << lua_tostring(_L, -1) << std::endl;
        break;
    default:
        std::cout << lua_tostring(_L, -1) << std::endl;
    }
    lua_getglobal(_L, function);
    // note: the loaded chunk is still on the stack here, so this pcall actually invokes
    // the chunk (with the global as its single argument) rather than the named function;
    // it re-runs the whole script every frame, which causes the context loss described above
    status = lua_pcall(_L, 1, 1, 0);
    if (status != LUA_OK) {
        std::cout << "error running file: " << lua_tostring(_L, -1) << std::endl;
    }
}

void matrix::update()
{
    load_file("lua/update.lua", "update");
}
void matrix::update()
{
load_file("lua/update.lua", "update");
}
I'm thinking of passing some arguments when calling the update() function, but I'm wondering if the C++ to Lua and back to C++ approach is correct and efficient, especially considering that I might transfer and modify huge matrices in Lua. I probably lack some embedded-Lua knowledge about keeping context while loading a script. Do you have any general advice on how I could improve my code? I know that my current approach is overly complicated.
A quick fix would be to only load the file if it has been modified since the last frame:
static time_t last_modified = 0;
struct stat sbuf;
stat(file, &sbuf);   // requires <sys/stat.h>
if (sbuf.st_mtime > last_modified) {
    last_modified = sbuf.st_mtime;
    status = luaL_loadfile(_L, file);
    // etc
}
// Now call the function
lua_getglobal(_L, function);
status = lua_pcall(_L, 1, 1, 0);
OK, loading the chunk containing the update() function once, and keeping a global parameter table in the Lua script, is the way to go. I achieved this using the following guidelines, and I will post the detailed steps below. Basically, loading and running the script once first ensures that all its global variables live in the Lua state held by C++. Then storing the wanted function as a registry reference allows us to run it again and again, while the script's global variables keep evolving on their own.
Step 1
First call luaL_loadfile once at init
Step 2
Run the script once using lua_pcall(_L, 0, 0, 0);
This ensures that the global variables, which are used as parameters in the Lua script, are in memory.
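In one piece, the one-time initialisation might look like this (a sketch; it assumes _L is an open lua_State with the standard libraries loaded):

int status = luaL_loadfile(_L, "lua/update.lua");   // compile the chunk and push it on the stack
if (status == LUA_OK)
    status = lua_pcall(_L, 0, 0, 0);                // run the chunk once: defines the globals and functions
if (status != LUA_OK) {
    std::cout << lua_tostring(_L, -1) << std::endl; // the error message is on top of the stack
    lua_pop(_L, 1);
}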
Step 3
Store the Lua function. I managed to do it with the following C++ code:
void matrix::store(char * function)
{
    lua_newtable(_L);                          // create a table for functions
    _idx = luaL_ref(_L, LUA_REGISTRYINDEX);    // store said table in the pseudo-registry
    lua_rawgeti(_L, LUA_REGISTRYINDEX, _idx);  // retrieve the function table
    lua_getglobal(_L, function);               // retrieve the function to store
    if (lua_isfunction(_L, -1)) {
        _f = luaL_ref(_L, -2);                 // store the function in the function table (pops it)
    }
    else {
        lua_pop(_L, 1);
        std::cout << "can't find " << function << std::endl;
    }
    // the function table is now on top of the stack
    lua_pop(_L, 1);                            // we are done with the function table, so pop it
    std::cout << "idx: " << _idx << ", function: " << _f << std::endl;
}
Step 4
Call the stored function again when rendering using the following C++ function:
void matrix::run()
{
    int status;
    if (_f == -1) {
        std::cout << "invalid function index " << _f << std::endl;
    }
    else {
        lua_rawgeti(_L, LUA_REGISTRYINDEX, _idx);  // retrieve the function table
        lua_rawgeti(_L, -1, _f);                   // retrieve the function
        // call the function
        status = lua_pcall(_L, 0, 0, 0);           // 0 arguments, 0 results
        if (status != LUA_OK) {
            std::cout << "error running function: " << lua_tostring(_L, -1) << std::endl;
            lua_pop(_L, 1);                        // pop the error message
        }
        // don't forget to pop the function table from the stack
        lua_pop(_L, 1);
    }
}
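Putting steps 1 to 4 together, the per-frame usage reduces to something like this (a sketch using the member functions above):

matrix m;
// steps 1-2 (not shown): luaL_loadfile + lua_pcall once, so that update() exists
m.store("update");   // step 3: cache the function in the registry
// render loop, every frame:
m.run();             // step 4: re-invoke update(); the script's globals keep their values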
Step 5 (optional)
If we set all the Lua parameters in a global table, we can retrieve them dynamically in C++ using the following piece of code:
void matrix::get_params(char * p)
{
    lua_getglobal(_L, p);
    lua_pushnil(_L);                               // first key for lua_next
    int i = 0;
    while (lua_next(_L, -2))
    {
        const char * key = lua_tostring(_L, -2);   // assumes string keys; lua_tostring on a number key would confuse lua_next
        double value = lua_tonumber(_L, -1);
        lua_pop(_L, 1);                            // pop the value, keep the key for the next iteration
        std::cout << key << " = " << value << std::endl;
        _h[i].key.assign(key);
        _h[i].value = value;
        i++;
    }
    lua_pop(_L, 1);                                // pop the table
}
Where _h is a simple dynamic structure defined as such:
typedef struct {
    std::string key;
    float value;
} hash;
I only use floats, so this simple structure is convenient enough for my needs, and it allows me to add lots of variables in my Lua script without bothering with a structure definition in C++. This way I can add as many parameters as I want to my Lua table and do the maths when updating.
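Continuing the sketch above, a single call reads out the whole parameter table defined in step 6:

m.get_params("p");   // prints and stores amplitude, frequency and phase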
Step 6
Tweak the Lua script forever! Et voilà:
p = {
    amplitude = 1.5,
    frequency = 500,
    phase = 0.0
}

function transform(m)
    r = {}
    for i = 1, #m do
        r[i] = {}
        for j = 1, #m[i] do
            if i % 2 == 1 then  -- as above: a bare (i % 2) is always truthy in Lua
                r[i][j] = p.amplitude * math.sin(m[i][j] + p.phase)
            else
                r[i][j] = -p.amplitude * math.sin(m[i][j] + p.phase)
            end
            p.phase = p.phase + 0.001
        end
    end
    return r
end

-- called by c++
function update()
    m = pull()
    r = transform(m)
    push(r)
end
This solution fits my needs, but seems very complicated and inefficient. But it was a fine hacking session anyway.

Z3 Optimizer Unsatisfiability with Real Constraints Using C++ API

I'm running into a problem when trying to use the Z3 optimizer to solve graph partitioning problems. Specifically, the code below fails to produce a satisfying model:
namespace z3 {
    expr ite(context& con, expr cond, expr then_, expr else_) {
        return to_expr(con, Z3_mk_ite(con, cond, then_, else_));
    }
}
bool smtPart(void) {
    // Graph setup
    vector<int32_t> nodes = {{ 4, 2, 1, 1 }};
    vector<tuple<node_pos_t, node_pos_t, int32_t>> edges;
    GraphType graph(nodes, edges);
    // Z3 setup
    z3::context con;
    z3::optimize opt(con);
    string n_str = "n", sub_p_str = "_p";
    // Re-usable constants
    z3::expr zero = con.int_val(0);
    // Create the sort representing the different partitions.
    const char* part_sort_names[2] = { "P0", "P1" };
    z3::func_decl_vector part_consts(con), part_preds(con);
    z3::sort part_sort =
        con.enumeration_sort("PartID",
                             2,
                             part_sort_names,
                             part_consts,
                             part_preds);
    // Create the constants that represent partition choices.
    vector<z3::expr> part_vars;
    part_vars.reserve(graph.numNodes());
    z3::expr p0_acc = zero,
             p1_acc = zero;
    typename GraphType::NodeData total_weight = typename GraphType::NodeData();
    for (const auto& node : graph.nodes()) {
        total_weight += node.data;
        ostringstream name;
        name << n_str << node.id << sub_p_str;
        z3::expr nchoice = con.constant(name.str().c_str(), part_sort);
        part_vars.push_back(nchoice);
        p0_acc = p0_acc + z3::ite(con,
                                  nchoice == part_consts[0](),
                                  con.int_val(node.data),
                                  zero);
        p1_acc = p1_acc + z3::ite(con,
                                  nchoice == part_consts[1](),
                                  con.int_val(node.data),
                                  zero);
    }
    z3::expr imbalance = con.int_const("imbalance");
    opt.add(imbalance ==
            z3::ite(con,
                    p0_acc > p1_acc,
                    p0_acc - p1_acc,
                    p1_acc - p0_acc));
    z3::expr imbalance_limit = con.real_val(total_weight, 100);
    opt.add(imbalance <= imbalance_limit);
    z3::expr edge_cut = zero;
    for (const auto& edge : graph.edges()) {
        edge_cut = edge_cut +
            z3::ite(con,
                    (part_vars[edge.node0().pos()] ==
                     part_vars[edge.node1().pos()]),
                    zero,
                    con.int_val(edge.data));
    }
    opt.minimize(edge_cut);
    opt.minimize(imbalance);
    z3::check_result opt_result = opt.check();
    if (opt_result == z3::check_result::sat) {
        auto mod = opt.get_model();
        size_t node_id = 0;
        for (z3::expr& npv : part_vars) {
            cout << "Node " << node_id++ << ": " << mod.eval(npv) << endl;
        }
        return true;
    } else if (opt_result == z3::check_result::unsat) {
        cerr << "Constraints are unsatisfiable." << endl;
        return false;
    } else {
        cerr << "Result is unknown." << endl;
        return false;
    }
}
If I remove the minimize commands and use a solver instead of an optimize, it will find a satisfying model with 0 imbalance. I can also get an optimize to find a satisfying model if I either:
Remove the constraint imbalance <= imbalance_limit or
Make the imbalance limit reducible to an integer. In this example the total weight is 8. If the imbalance limit is set to 8/1, 8/2, 8/4, or 8/8 the optimizer will find satisfying models.
I have tried to_real(imbalance) <= imbalance_limit to no avail. I also considered the possibility that Z3 is using the wrong logic (one that doesn't include theories for real numbers) but I haven't found a way to set that using the C/C++ API.
If anyone could tell me why the optimizer fails in the presence of the real valued constraint or could suggest improvements to my encoding it would be much appreciated. Thanks in advance.
Could you reproduce the result by using opt.to_string() to dump the state (just before the check())? This would create a string formatted in SMT-LIB2 with optimization commands. It is then easier to exchange benchmarks. You should see that it reports unsat with the optimization commands and sat if you comment out the optimization commands.
If you are able to produce a bug, then post an issue on GitHub.com/z3prover/z3.git with a repro.
If not, you can use Z3_open_log before you create the z3 context and record a rerunnable log file. It is possible (but not as easy) to dig into unsoundness bugs that way.
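For instance, following the suggestion above (a sketch; opt and con are the objects from the code in the question):

// dump the optimizer state as SMT-LIB2, including the optimization commands, just before checking
std::cout << opt.to_string() << std::endl;
z3::check_result opt_result = opt.check();

For the log-file route, Z3_open_log("z3.log"); has to be called before the z3::context is created.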
It turns out that this was a bug in Z3. I created an Issue on GitHub and they have since responded with a patch. I'm compiling and testing the fix now, but I expect it to work.
Edit: Yup, that patch fixed the issue for the command line tool and the C++ API.

Program takes (a lot) longer than it should due to a function call that never runs

I'm currently running Bayesian Optimization, written in C++. I use a toolbox called Bayesopt from Ruben Martinez-Cantin (http://rmcantin.bitbucket.org/html/). I'm doing my thesis on Bayesian Optimization (https://en.wikipedia.org/wiki/Bayesian_optimization).
I had previously experimented with this toolbox, and I noticed this week that the code runs a lot slower than I remembered. It's worth mentioning that I did write some code that works with this toolbox.
To figure out whether my code was at fault, I tried an example that doesn't use any of it.
Consider the following example:
#include <iostream>
#include <bayesopt.hpp>

class ExampleMichalewicz : public bayesopt::ContinuousModel
{
public:
    ExampleMichalewicz(bopt_params par);
    double evaluateSample(const vectord& x);
    bool checkReachability(const vectord& query) { return true; }
    void printOptimal();
private:
    double mExp;
};

ExampleMichalewicz::ExampleMichalewicz(bopt_params par):
    ContinuousModel(10, par)
{
    mExp = 10;
}

double ExampleMichalewicz::evaluateSample(const vectord& x)
{
    size_t dim = x.size();
    double sum = 0.0;
    for (size_t i = 0; i < dim; ++i)
    {
        double frac = x(i)*x(i)*(i+1);
        frac /= M_PI;
        sum += std::sin(x(i)) * std::pow(std::sin(frac), 2*mExp);
    }
    return -sum;
}

void ExampleMichalewicz::printOptimal()
{
    std::cout << "Solutions: " << std::endl;
    std::cout << "f(x)=-1.8013 (n=2)" << std::endl;
    std::cout << "f(x)=-4.687658 (n=5)" << std::endl;
    std::cout << "f(x)=-9.66015 (n=10);" << std::endl;
}

int main(int nargs, char *args[])
{
    bopt_params par = initialize_parameters_to_default();
    par.n_iterations = 20;
    par.n_init_samples = 30;
    par.random_seed = 0;
    par.verbose_level = 1;
    par.noise = 1e-10;
    par.kernel.name = "kMaternARD5";
    par.crit_name = "cBEI";
    par.crit_params[0] = 1;
    par.crit_params[1] = 0.1;
    par.n_crit_params = 2;
    par.epsilon = 0.0;
    par.force_jump = 0.000;
    par.n_iter_relearn = 1;   // Number of samples before relearning the kernel
    par.init_method = 1;      // Sampling method for the initial set: 1-LHS, 2-Sobol (if available)
    par.l_type = L_MCMC;      // Type of learning for the kernel params

    ExampleMichalewicz michalewicz(par);
    vectord result(10);

    michalewicz.optimize(result);
    std::cout << "Result: " << result << "->"
              << michalewicz.evaluateSample(result) << std::endl;
    michalewicz.printOptimal();
    return 0;
}
If I compile this example alone, the run time is about 23 seconds, with this CMake file:
PROJECT ( myDemo )
ADD_EXECUTABLE(myDemo ./main.cpp)
find_package( Boost REQUIRED )
if(Boost_FOUND)
include_directories(${Boost_INCLUDE_DIRS})
else(Boost_FOUND)
find_library(Boost boost PATHS /opt/local/lib)
include_directories(${Boost_LIBRARY_PATH})
endif()
include_directories(${PROJECT_SOURCE_DIR}/include)
include_directories("../bayesopt/include")
include_directories("../bayesopt/utils")
set(CMAKE_CXX_FLAGS " -Wall -std=c++11 -lpthread -Wno-unused-local-typedefs -DNDEBUG -DBOOST_UBLAS_NDEBUG")
target_link_libraries(myDemo libbayesopt.a libnlopt.a)
Now consider the same main example, but where I add three additional source files to my CMake project (without including them in main.cpp). These three files are a subset of my code.
PROJECT ( myDemo )
ADD_EXECUTABLE(myDemo ./iCubSimulator.cpp ./src/DatasetDist.cpp ./src/MeanModelDist.cpp ./src/TGPNode.cpp)
find_package( Boost REQUIRED )
if(Boost_FOUND)
include_directories(${Boost_INCLUDE_DIRS})
else(Boost_FOUND)
find_library(Boost boost PATHS /opt/local/lib)
include_directories(${Boost_LIBRARY_PATH})
endif()
include_directories(${PROJECT_SOURCE_DIR}/include)
include_directories("../bayesopt/include")
include_directories("../bayesopt/utils")
set(CMAKE_CXX_FLAGS " -Wall -std=c++11 -lpthread -Wno-unused-local-typedefs -DNDEBUG -DBOOST_UBLAS_NDEBUG")
target_link_libraries(myDemo libbayesopt.a libnlopt.a)
This time, the run time is about 3 minutes. This is critical in my work, since it gets much worse as I increase par.n_iterations.
I further arrived at the conclusion that if I comment out one line in TGPNode.cpp, namely
utils::cholesky_decompose(K, L); // NOTICE THAT THIS LINE IS NEVER CALLED
I get the 23 seconds back. This function comes from the file ublas_cholesky.hpp in the bayesopt toolbox.
It is also important to note that the same function is also called within the toolbox code itself. That call is not commented out, and it runs during michalewicz.optimize(result);.
Does anyone have any idea why this is happening? Any insight about the subject would be a great help.
Greatly appreciated.
Kindly, José Nogueira
It's not going to return.
It's going to recurse infinitely (until the stack overflows).
Here's what the code reads like:
bopt_params initialize_parameters_to_default(void)
{
    bayesopt::Parameters par;
    return par.generate_bopt_params();   // see below: this calls straight back into this function
}
And generate_bopt_params:
bopt_params Parameters::generate_bopt_params() {
    bopt_params c_params = initialize_parameters_to_default();   // mutual recursion: calls the function above
    // ...
}

It looks like someone tried to remove code duplication without actually testing things at all. You could reinstate the commented-out body of the first function.
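Schematically, the fix is to give one of the two functions a real body again instead of delegating (a sketch; the actual defaults are the ones in the commented-out body mentioned above):

bopt_params initialize_parameters_to_default(void)
{
    bopt_params par;
    // ... restore the field-by-field default assignments that used to live here ...
    return par;   // no call to generate_bopt_params(), so the cycle is broken
}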