Segmentation fault (core dumped) when using larger boost-matrices - c++

When I run the code I get the error: Segmentation fault (core dumped)
The problem goes away if I decrease the dimensions of MatSamplesDimM in the header file (e.g. from 300000 to 30000), or if I declare only Matrix1 and not Matrix2 in the main file. The problem is independent of the environment in which I run the code (Eclipse or the terminal).
Do you have any ideas? Thank you very much for your help!
StrangeBehavior.cpp is my main file:
// System includes
#include <iostream>
#include <fstream> //fstream: Stream class to both read and write from/to files; for output
// Boost includes
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <boost/assign/list_of.hpp>
#include <boost/math/distributions/normal.hpp> //to create normal distribution
#include <vector> // stl vector header
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/stats.hpp> //accumulator for mean, variance...
#include <boost/accumulators/statistics/mean.hpp>
#include <boost/bind/bind.hpp>
#include <boost/numeric/ublas/vector_proxy.hpp> // for using row()...
#include <boost/numeric/ublas/matrix_proxy.hpp> // for using row()...
#include <boost/numeric/ublas/vector_expression.hpp> // for using inner_prod
#include <boost/numeric/ublas/matrix_expression.hpp> // for using inner_prod, element_div
#include <boost/math/special_functions/gamma.hpp> // for Gamma function
#include "ParameterDeclaration.hpp"
using namespace std; // saves us typing std:: before vector
int main() {
    std::cout << "Starting " << std::endl;
    MatSamplesDimM Matrix1;
    MatSamplesDimM Matrix2;
    std::cout << "Finishing" << std::endl;
    return 0;
}
and ParameterDeclaration.hpp is my header file:
#ifndef parameter_
#define parameter_
#include <iostream>
#include <boost/numeric/ublas/matrix.hpp> // needed so the header compiles on its own
namespace ublas = boost::numeric::ublas;
typedef ublas::bounded_matrix<double,300000,2> MatSamplesDimM;
#endif

Try using the heap instead of the stack, i.e. allocate the matrices with new. A bounded_matrix<double,300000,2> stores its 600000 doubles inline, about 4.8 MB per object, so two of them as locals (~9.6 MB) overflow the default stack, which is often 8 MB on Linux. That is also why shrinking the dimension to 30000 (~0.5 MB per matrix) makes the crash disappear.

Related

How to avoid error message: reference to "is_empty" is ambiguous

So in my code, I've used a method is_empty() from the Boost library. I know is_empty has two definitions. One is the filesystem free function, with a signature like:
std::filesystem::is_empty(const std::filesystem::path& p).
The other is the std::is_empty type trait (which derives from std::integral_constant):
inline constexpr bool is_empty_v = is_empty<T>::value;
I intended to use the one in the Boost library, but how can I do that? I tried not writing using namespace std at the beginning; instead, every time I need cout, I write std::cout, to avoid possible ambiguity with my is_empty call. But I still get the same error. Is there another way to resolve this ambiguity? I attached part of my code snippet below. Thank you!
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "boost/filesystem.hpp"
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
...
using namespace boost::filesystem;
using namespace cv;
... inside main() function:
...
directory_iterator itr(param.referFolder);
if( is_empty(itr->path()) )
{
std::cout << "ERROR: ReferenceFolder is empty. Please place reference images inside." << std::endl;
return 1;
}
... ...
for(directory_iterator itr(param.dataFolder); itr != end_itr; ++itr) // search every folder
{
if(is_directory(itr->status()) && !is_empty(itr->path())){
... ...
}
}

Clarifications on the discussion in Accelerated C++ header files

I am reading Accelerated C++ Chapter 4: Organizing Programs and Data. I came across the following piece of code, which is saved to a file called median.cpp:
// median.cpp file contents
#include <algorithm>
#include <stdexcept>
#include <vector>
using std::domain_error;
using std::sort;
using std::vector;
// Compute the median of a vector<double>
double median(vector<double> vec)
{
// Code for what the median function does goes here
}
and then, later on page 67, the authors discuss an include guard like the one below, which is saved to a file called median.h.
#ifndef GUARD_median_h
#define GUARD_median_h
// median.h - final version
#include <vector>
double median(std::vector<double>);
//
#endif
Are we supposed to "include" the median.h into median.cpp before compiling, like below?
// median.cpp file contents
#include "median.h"
#include <algorithm>
#include <stdexcept>
#include <vector>
using std::domain_error;
using std::sort;
using std::vector;
// Compute the median of a vector<double>
double median(vector<double> vec)
{
// Code for what the median function does goes here
}
I am asking this because, in section 4.3 where the authors discuss this topic, they do not mention how one is supposed to make use of median.cpp and median.h for compilation.

OpenCV: extractor->descriptorSize() - Segfault

I'm trying to follow this tutorial for object detection, but I got stuck at the beginning.
So far my code is this:
#include <stdio.h>
#include <stdlib.h>
#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>
#include <string>
#include <dirent.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
using namespace cv;
using namespace std;
int main() {
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF");
    //Mat training_descriptors(1, extractor->descriptorSize(), extractor->descriptorType());
    extractor->descriptorSize();
    return 0;
}
The following line extractor->descriptorSize(); gives a Segmentation fault (core dumped) and I don't know why. Do you have any ideas?
I found out that the nonfree module of OpenCV was not installed. After installing it, I included the nonfree header #include <opencv2/nonfree/nonfree.hpp> and called cv::initModule_nonfree();. That solved the problem.

SIGSEGV while debugging in CodeBlocks

I am running the 32-bit Code::Blocks (10.05) on my 64-bit machine. It produces SIGSEGV every time. This happens on my office PC, but on my home PC (32-bit, Code::Blocks 10.05) there is no problem. I am attaching the Call Stack window. Is there a problem with kernel32.dll?
Please note that, for testing, I wrote just a single line of code, but it still produces SIGSEGV. I re-installed Code::Blocks two or three times. Is there a problem with my system? I am almost crazy. :'(
As requested I am putting the simple code here:
#include <set>
#include <map>
#include <list>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cctype>
#include <cstdio>
#include <string>
#include <vector>
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <sstream>
#include <iostream>
#include <algorithm>
using namespace std;
int main()
{
    //READ("input.txt");
    //WRITE("output.txt");
    int i, j, k;
    int TC, tc;
    int x0, y0, x1, y1;
    cout << "hi";
    return 0;
}

How to load Multidimensional array values into vector?

This is part of the code (header and the main part):
#include <iostream>
#include <sstream>
#include <string>
#include <gl\GL.h>
#include <gl\GLU.h>
#include <glut.h>
#include <RassHost.h>
#include <api\iomap.h>
#include <api\iotrans.h>
#include <api\cgeometry.h>
#include <vector>
using namespace std;
int main()
{
    cout << "Enter IP: " << endl;
    getline(cin, server_ip);
    enum(KEY_L = 'A', KEY_R = 'D', KEY_RUN = 'WW', KEY_JUMP='SPACE');
    typedef OBJECT_3D_SYS_TYPES_NUM OBJECT3D_RCN_TYPE;
    OBJECT3D_RCN_TYPE _psyObjects[][] = getPsyhicsPartObjects();
    vector<OBJECT3D_RCN_TYPE> _objects;
    //I would like to load _psyObjects[][] into vector<OBJECT3D_RCN_TYPE> _objects;
    Server::StartGame(Server::getIP(), 8888, "-r run", false);
    system("pause");
    return 0;
}
Is it possible to copy the _psyObjects values into a vector<OBJECT3D_RCN_TYPE>?
I want to manipulate the multidimensional array through the vector API, if that is possible.
Thanks!
You'll need to create a vector of vectors:
vector< vector<OBJECT3D_RCN_TYPE> > _objects;
Then just fill it like a normal vector.
I'd post more code, but you need to know the dimensions of the array, and I can't see those from the code.
You could also use a boost::multi_array. Its API isn't exactly like std::vector's, but it may be similar enough to meet your needs.