I need to form the input for HOGDescriptor::setSVMDetector().
I compute descriptors with OpenCV, then use libSVM to get a model file.
To form the input, I know that I need to take the support vectors' values and element-wise multiply them by their alphas (then append -rho at the end), but I don't see where to get these alphas.
I have a list of SVs like:
1 1:-0.0434783 2:0.153846 3:0.194444 4:-0.353712 5:-0.45054
1 1:-0.2173916 2:-0.38461 3:0.222262 4:-0.676686 5:-0.78062
But where do I get the alphas?
OK, it seems things are clear now.
The alphas are the first column in my case.
Since all of them were equal to -1 or 1 in my test model (I'm not sure why), I thought these were labels.
Anyway, here is my parser (but you need to leave only the SVs in the file):
#include <fstream>
#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>

std::ifstream ifs("cars_model.model");
const int nsv = 90;        // number of support vectors
const int nfeatures = 144; // descriptor length
float rho = 12.5459f;
std::vector<float> res(nfeatures, 0);
std::vector<float> alphas;
cv::Mat_<float> temp(nsv, nfeatures);
std::cout << "Loading model file...\n";
for (int i = 0; i < nsv; i++) {
    float al = 0;
    ifs >> al;            // alpha is the first column of each SV line
    alphas.push_back(al);
    for (int j = 0; j < nfeatures; j++) {
        float ind, s;
        char junk;
        ifs >> ind >> junk >> s; // parses "index:value"
        temp(i, j) = s;
    }
}
ifs.close();
std::cout << "Computing primal form...\n";
for (int i = 0; i < nsv; i++) {
    float alpha = alphas[i];
    for (int j = 0; j < nfeatures; j++) {
        res[j] += temp(i, j) * alpha;
    }
}
//res.push_back(-rho);
std::ofstream ofs("primal.txt");
for (size_t i = 0; i < res.size(); i++)
    ofs << res[i] << ' ';
ofs.close();
And you know, it works. You can set rho as the threshold of the detector.
But why do you want to classify this "by hand"? OpenCV has a classification routine called predict, which uses the found SVs and alphas:
float response = SVM.predict(sampleMat);
If you really want to do it yourself, you would need not only the SVs and alphas but also the kernel function used for training, and you would compute
SUM alpha_i K( support_vector_i , data_point ) - rho
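For a linear kernel, K(u, v) is just a dot product, and the sum collapses to the primal form computed by the parser above. A minimal sketch in plain C++ (the vectors and numbers in the test are made up for illustration; in a libSVM model the stored coefficients already include the label sign):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Decision value: SUM alpha_i * K(sv_i, x) - rho, with K = dot product (linear kernel).
double decision_value(const std::vector<std::vector<double>>& svs,
                      const std::vector<double>& alphas,
                      const std::vector<double>& x,
                      double rho) {
    double sum = 0.0;
    for (std::size_t i = 0; i < svs.size(); ++i) {
        double dot = 0.0;
        for (std::size_t j = 0; j < x.size(); ++j)
            dot += svs[i][j] * x[j];
        sum += alphas[i] * dot;
    }
    return sum - rho;
}

// For a linear kernel the same value comes from the primal weights:
// w = SUM alpha_i * sv_i, response = w . x - rho.
std::vector<double> primal_weights(const std::vector<std::vector<double>>& svs,
                                   const std::vector<double>& alphas) {
    std::vector<double> w(svs[0].size(), 0.0);
    for (std::size_t i = 0; i < svs.size(); ++i)
        for (std::size_t j = 0; j < w.size(); ++j)
            w[j] += alphas[i] * svs[i][j];
    return w;
}
```

The second function is exactly what the parser writes to primal.txt; the equivalence of the two only holds for the linear kernel.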
I am not sure whether it is possible to extract alphas "by hand" without extending the SVM class, as one can see in the sources - alphas are stored in the CvSVMDecisionFunc structure:
struct CvSVMDecisionFunc
{
    double rho;
    int sv_count;
    double* alpha;
    int* sv_index;
};
while the only reference to this structure is in the protected section:
protected:
(...)
CvSVMDecisionFunc* decision_func;
From the source code of svm.cpp we can find that it is only publicly accessible through the save routine. So one "hack" would be to save the model and extract the alphas from there (they will be located in the "Decision function" section, written in a human-readable format).
The simplest extraction technique seems to be to extend the CvSVM class and include a method like
public:
CvSVMDecisionFunc* get_decision_function() { return decision_func; }
update
after clarification that the OP is actually trying to use an externally trained model in OpenCV - the easiest way is to convert the libsvm model created by the other method (libsvm, linearsvm etc.) into an OpenCV-compatible format and load it using the read method
void CvSVM::read( CvFileStorage* fs, CvFileNode* svm_node )
see source for more details.
My program opens a file which contains 100,000 numbers and parses them out into a 10,000 x 10 array correlating to 10,000 sets of 10 physical parameters. The program then iterates through each row of the array, performing overlap calculations between that row and every other row in the array.
The process is quite simple, and being new to C++, I programmed it in the most straightforward way I could think of. However, I know that I'm not doing this in the most optimal way possible, which is something I would love to fix, as the program is going to face off against my cohort's identical program, coded in Fortran, in a "race".
I have a feeling that I am going to need to implement multithreading to accomplish my goal of speeding up the program, but not only am I new to C++, I am also new to multithreading, so I'm not sure how I should go about creating new threads in a beneficial way, or whether it would even give me much "gain on investment", so to speak.
The program has the potential to be run on a machine with over 50 cores, but because the program is so simple, I'm not convinced that more threads is necessarily better. I think that if I implement two threads to compute the complex parameters of the two gaussians, one thread to compute the overlap between the gaussians, and one thread dedicated to writing to the file, I could speed up the program significantly, but I could also be wrong.
CODE:
cout << "Working...\n";
double **gaussian_array;
gaussian_array = (double **)malloc(N*sizeof(double *));
for(int i = 0; i < N; i++){
    gaussian_array[i] = (double *)malloc(10*sizeof(double));
}
fstream gaussians;
gaussians.open("GaussParams", ios::in);
if (!gaussians){
    cout << "File not found.";
}
else {
    //generate the array of gaussians -> [10000][10]
    int i = 0;
    while(i < N) {
        char ch;
        string strNums;
        string Num;
        string strtab[10];
        int j = 0;
        getline(gaussians, strNums);
        stringstream gaussian(strNums);
        while(gaussian >> ch) {
            if(ch != ',') {
                Num += ch;
                strtab[j] = Num;
            }
            else {
                Num = "";
                j += 1;
            }
        }
        for(int c = 0; c < 10; c++) {
            stringstream dbl(strtab[c]);
            dbl >> gaussian_array[i][c];
        }
        i += 1;
    }
}
gaussians.close();
//Below is the process to generate the overlap file between all gaussians:
string buffer;
ofstream overlaps;
overlaps.open("OverlapMatrix", ios::trunc);
overlaps.precision(15);
for(int i = 0; i < N; i++) {
    for(int j = 0 ; j < N; j++){
        double r1[6][2];
        double r2[6][2];
        double ol[2];
        //compute complex parameters from the two gaussians
        compute_params(gaussian_array[i], r1);
        compute_params(gaussian_array[j], r2);
        //compute overlap between the gaussians using the complex parameters
        compute_overlap(r1, r2, ol);
        //write to file
        overlaps << ol[0] << "," << ol[1];
        if(j < N - 1)
            overlaps << " ";
        else
            overlaps << "\n";
    }
}
overlaps.close();
return 0;
Any suggestions are greatly appreciated. Thanks!
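Since each (i, j) pair in the double loop depends only on rows i and j, the outer loop parallelizes naturally. A minimal sketch with std::thread follows; the overlap function here is a placeholder dot product, since compute_params/compute_overlap are not shown. Each thread takes a strided set of rows and writes into a preallocated result matrix, so no locking is needed:

```cpp
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder for the real per-pair computation: any pure function of two rows.
static double overlap(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k)
        s += a[k] * b[k];
    return s;
}

// Each thread handles rows t, t + nthreads, t + 2*nthreads, ...; distinct rows
// of `result` are written by distinct threads, so the writes never conflict.
std::vector<std::vector<double>>
all_overlaps(const std::vector<std::vector<double>>& rows, unsigned nthreads) {
    const std::size_t n = rows.size();
    std::vector<std::vector<double>> result(n, std::vector<double>(n));
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) {
        pool.emplace_back([&, t] {
            for (std::size_t i = t; i < n; i += nthreads)
                for (std::size_t j = 0; j < n; ++j)
                    result[i][j] = overlap(rows[i], rows[j]);
        });
    }
    for (auto& th : pool)
        th.join();
    return result;
}
```

Writing the file would still happen serially after the join; keeping file I/O out of the parallel region typically matters more than the raw thread count.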
Dear Stack Community,
I'm doing a DSP exercise to complement my C++ FIR lowpass filter, with filter coefficients designed in and exported from Matlab. The DSP exercise in question is decimating the output array of the FIR lowpass filter to a lower sample rate by a factor of M. In C++ I made a successful but extremely simple implementation within a .cpp file, and I've been trying hard to convert it to a function to which I can pass the output array of the FIR filter. Here is the very basic version of the code:
int n = 0;
int length = 50;
int M = 12;
float array[length];
float array2[n]; // zero-sized at this point; writing into it below is undefined behavior
for (int i = 0; i < length; i++) {
    array[i] = std::rand();
}
for (int i = 0; i < length; i = i + M) {
    array2[n++] = array[i];
}
for (int i = 0; i < n; i++) {
    std::cout << i << " " << array2[i] << std::endl;
}
As you can see, very simple. My attempt to convert this to a function is unfortunately not working. Here is the function as is:
std::vector<float> decimated_array(int M, std::vector<float> arr) {
    size_t n_idx = 0;
    std::vector<float> decimated(n_idx);
    for (int i = 0; i < (int)arr.size(); i = i + M) {
        decimated[n_idx++] = arr[i];
    }
    return decimated;
}
This produces the very common Xcode error EXC_BAD_ACCESS when used from the .cpp file below. The error occurs specifically at the line 'decimated[n_idx++] = arr[i];':
int length = 50;
int M = 3;
std::vector<float> fct_array(length);
for (int i = 0; i < length; i++) {
    fct_array[i] = std::rand();
}
FIR_LPF test;
std::vector<float> output;
output = test.decimated_array(M, fct_array);
I'm trying to understand what is incorrect with my application, or perhaps just my translation, of the algorithm into a more general setting. Any help with this matter would be greatly appreciated, and hopefully this is clear enough for the community to understand.
Regards, Vhaanzeit
The issue:
size_t n_idx = 0;
std::vector<float> decimated(n_idx);
You did not size the vector before you used it, so you were invoking undefined behavior when assigning to elements 0, 1, etc. of the decimated vector.
What you could have done instead is call push_back in the loop:
std::vector<float> decimated_array(int M, std::vector<float> arr)
{
    std::vector<float> decimated;
    for (size_t i = 0; i < arr.size(); i = i + M) {
        decimated.push_back(arr[i]);
    }
    return decimated;
}
The decimated vector starts out empty, but a new item is added with the push_back call.
Also, you should pass the arr vector by const reference, not by value.
std::vector<float> decimated_array(int M, const std::vector<float>& arr);
Passing by (const) reference does not invoke a copy.
Edit: Changed loop counter to correct type, thus not needing the cast.
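Putting the pieces of the answer together (push_back plus the const-reference signature), a quick self-contained check; the reserve call is an optional optimization, and the sample values are just illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Corrected decimator: keeps every M-th sample, growing the output with push_back.
std::vector<float> decimated_array(int M, const std::vector<float>& arr) {
    std::vector<float> decimated;
    decimated.reserve(arr.size() / M + 1); // optional: avoid reallocations
    for (std::size_t i = 0; i < arr.size(); i += M) {
        decimated.push_back(arr[i]);
    }
    return decimated;
}
```

Decimating the sequence 0..9 with M = 3 keeps indices 0, 3, 6 and 9.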
I was wondering if there is a more efficient way to remove columns or rows that contain only zero elements. I am sure there is a way using the functions in the Eigen library, but I do not know how.
Right now I am doing it like so, with the while loop used in case there are multiple rows/columns that sum to zero; I don't want to exceed the range limits or skip any zero rows.
void removeZeroRows() {
    int16_t index = 0;
    int16_t num_rows = rows();
    while (index < num_rows) {
        double sum = row(index).sum();
        // I use a better test for zero, but use this for demonstration purposes
        if (sum == 0.0) {
            removeRow(index);
        }
        else {
            index++;
        }
        num_rows = rows();
    }
}
Currently (Eigen 3.3), there is no direct functionality for this (though it is planned for Eigen 3.4).
Meanwhile, you can use something like this (of course, row and col can be interchanged, and the output is just for illustration):
Eigen::MatrixXd A;
A.setRandom(4,4);
A.col(2).setZero();
// find non-zero columns:
Eigen::Matrix<bool, 1, Eigen::Dynamic> non_zeros = A.cast<bool>().colwise().any();
std::cout << "A:\n" << A << "\nnon_zeros:\n" << non_zeros << "\n\n";
// allocate result matrix:
Eigen::MatrixXd res(A.rows(), non_zeros.count());
// fill result matrix:
Eigen::Index j = 0;
for (Eigen::Index i = 0; i < A.cols(); ++i)
{
    if (non_zeros(i))
        res.col(j++) = A.col(i);
}
std::cout << "res:\n" << res << "\n\n";
Generally, you should avoid resizing a matrix at every iteration, but resize it to the final size as soon as possible.
With Eigen 3.4 something similar to this will be possible (syntax is not final yet):
Eigen::MatrixXd res = A("", A.cast<bool>().colwise().any());
Which would be equivalent to Matlab/Octave:
res = A(:, any(A));
////////////////////MAKE INPUT VALUES////////////////////
double *NumOfInputsPointer = NULL;
std::cout << "How many inputs?" << std::endl;
int NumOfInputs;
std::cin >> NumOfInputs;
NumOfInputsPointer = new double[NumOfInputs];
std::cout << std::endl;
double InputVal;
for(int a = 0; a < NumOfInputs; a++)
{
    std::cout << "What is the value for input " << a << std::endl;
    a+1;
    std::cin >> InputVal;
    *(NumOfInputsPointer + a) = InputVal;
}
std::cout << std::endl;
////////////////////MAKE WEIGHTS////////////////////
double *NumOfWeightsPointer = NULL;
int NumOfWeights;
NumOfWeightsPointer = new double[NumOfWeights];
double WightVal;
for(int a = 0; a < NumOfInputs; a++)
{
    *(NumOfWeightsPointer + a) = 0.5;
}
////////////////////Multiplication BRAIN BROKE!!!!!////////////////////
double *MultiplyPointer = NULL;
MultiplyPointer = NumOfInputsPointer;
for(int a = 0; a < NumOfInputs; a++)
{
//Stuff to do things
}
The code above is going to make a single artificial neuron. I already have it built to make an array with the user's wanted number of inputs, which then automatically gives every input a weight of 0.5.
The wall I have hit has me struggling to multiply the input-values array by the weights array, then save those products in another array to be added together later and passed through a modifier.
My struggle is with the multiplication and saving it into an array. I hope I explained my problem well enough.
There are many problems with this code. I would highly recommend using std::vector instead of arrays. If every input has a constant weight of 0.5, then what's the point of creating an array where all elements are 0.5? Just create a constant variable representing the 0.5 weight and apply it to each input. The second array is unnecessary from what I can tell. Creating the last array (again, this would be easier with a vector) would be similar to the first one because the size is going to be the same. It is based on the number of inputs. So just create an array of the same size, loop through each element in the first array, do the multiplication using the constant I described above, and then store the result into the new array.
Just new it like you did with the others, and store the result of the multiplication there:
MultiplyPointer = new double[NumOfInputs];
for (int a = 0; a < NumOfInputs; a++) {
    MultiplyPointer[a] = NumOfWeightsPointer[a] * NumOfInputsPointer[a];
}
That being said, there are better ways to go about solving your problem. std::vector has been mentioned, which makes the memory management and looping bits easier. I would go a step further and incorporate a library with the notions of a matrix and matrix expressions, such as OpenCV or dlib.
Example using Mat from OpenCV:
cv::Mat input(NumOfInputs, 1, CV_64F, NumOfInputsPointer);
cv::Mat weights(NumOfInputs, 1, CV_64F, cv::Scalar(0.5));
cv::Mat result = input.mul(weights);
If the weights vector is not to be modified and reused, just skip the whole thing:
cv::Mat result = input.mul(cv::Scalar(0.5));
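As a sketch of the std::vector route suggested in the answer above (the function name is made up, and the constant weight stands in for the weights array, as the answer describes):

```cpp
#include <cassert>
#include <vector>

// The weighted-sum stage of a single artificial neuron: multiply each input by
// a constant weight, keep the per-input products, and sum them.
double weighted_sum(const std::vector<double>& inputs, double weight) {
    std::vector<double> products;
    products.reserve(inputs.size());
    for (double x : inputs)
        products.push_back(x * weight); // saved so later stages can reuse them
    double sum = 0.0;
    for (double p : products)
        sum += p;
    return sum;
}
```

With inputs {1, 2, 3} and weight 0.5 the products are {0.5, 1.0, 1.5}, summing to 3.0; the sum would then be fed through the neuron's modifier (activation) function.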
I'm a complete beginner at this. I'll try to explain myself as much as I can.
int i, j;
string filename;
cout << "Please enter the file name: " << endl;
cin >> filename;
fstream stream;
stream.open(filename.c_str(),
ios::in|ios::out|ios::binary);
int file_size = get_int(stream, 2);
int start = get_int(stream, 10);
int width = get_int(stream, 18);
int height = get_int(stream, 22);
This part should get the file and its values.
for ( i = 0; i < height; i++ )
{
    for ( j = 0; j < width; j++)
    {
        for (int k = 0; k < split*split; k++){
            int pos = stream.tellg();
            int blue = stream.get();
            int green = stream.get();
            int red = stream.get();
And this reaches inside each pixel and gets the RGB values.
What I want is to first store the RGB values in a 2D array, then do some manipulations on that array. Then I'd like to create a new file with the manipulated image.
I've no clue, so some instructions along with some code would be really helpful.
The BMP file format is documented in many places, for example on Wikipedia.
The easiest way would be to implement a structure that describes the BMP header, read the entire structure in one go, and then read the individual pixels.
Your reading function is broken because you did not read the file signature - the "BM" field of the header.
On some operating systems there are already structures and functions for reading BMPs. On Windows, there's BITMAPFILEHEADER. Using those structures means you aren't using "pure C++".
If you still want to read BMP yourself, read the MSDN articles about BMP or google for "read bmp file" tutorials.
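The header-structure approach can be sketched like this; the struct below covers only the fields the question reads with get_int (field names are mine, offsets follow the public BMP format docs):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Packed layout of the start of a BMP file: the 14-byte file header plus the
// first fields of the info header, matching the offsets used in the question
// (file size at 2, pixel-data offset at 10, width at 18, height at 22).
#pragma pack(push, 1)
struct BmpFileHeader {
    char     signature[2]; // must be "BM"
    uint32_t file_size;    // offset 2
    uint32_t reserved;     // offset 6
    uint32_t data_offset;  // offset 10: where pixel data starts
    uint32_t header_size;  // offset 14: size of the info header
    int32_t  width;        // offset 18
    int32_t  height;       // offset 22
};
#pragma pack(pop)

static_assert(sizeof(BmpFileHeader) == 26, "header must be packed");

// Reading then becomes one call instead of several get_int()s, e.g.:
//   BmpFileHeader h;
//   stream.read(reinterpret_cast<char*>(&h), sizeof(h));
//   if (std::memcmp(h.signature, "BM", 2) != 0) { /* not a BMP */ }
```

The #pragma pack is what keeps the char[2] signature from forcing padding before file_size; without it the offsets would not line up with the file.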
This library is very easy to use: http://easybmp.sourceforge.net/. You can easily check the RGB values after loading the file.