I'm getting an error when I try to send a matrix into a proc. I'm pretty sure I'm doing something very wrong, but I can't figure out what.
use LinearAlgebra;

proc main() {
  var A = Matrix(
    [0.0, 0.8, 1.1, 0.0, 2.0]
    ,[0.8, 0.0, 1.3, 1.0, 0.0]
    ,[1.1, 1.3, 0.0, 0.5, 1.7]
    ,[0.0, 1.0, 0.5, 0.0, 1.5]
    ,[2.0, 0.0, 1.7, 1.5, 0.0]
    );
  check_dims(A);
}

proc check_dims(A: Matrix) {
  var t: bool = false;
  if (A.domain.dim(1) == A.domain.dim(2)) {
    t = true;
  }
  return t;
}
Gives me
mad.chpl:3: In function 'main':
mad.chpl:14: error: unresolved call 'check_dims([domain(2,int(64),false)] real(64))'
mad.chpl:17: note: candidates are: check_dims(A: Matrix)
I'm using chpl Version 1.15.0
Linear algebra objects (like matrices and vectors) are represented as arrays in Chapel. Therefore, changing Matrix (a type that does not exist) to [] (the syntax for an array type) should work as expected:
use LinearAlgebra;

proc main() {
  var A = Matrix(
    [0.0, 0.8, 1.1, 0.0, 2.0]
    ,[0.8, 0.0, 1.3, 1.0, 0.0]
    ,[1.1, 1.3, 0.0, 0.5, 1.7]
    ,[0.0, 1.0, 0.5, 0.0, 1.5]
    ,[2.0, 0.0, 1.7, 1.5, 0.0]
    );
  check_dims(A);
}

proc check_dims(A: []) {
  var t: bool = false;
  // method is dim()
  if (A.domain.dim(1) == A.domain.dim(2)) {
    t = true;
  }
  return t;
}
I'm not sure whether I should ask this on Stack Overflow or on Code Review, but because I couldn't find a similar problem, I'm posting it here.
At the moment I'm optimizing the code of a simple image manipulation application.
One of the goals is to give all the convolution effect classes their respective matrix as a private member variable. Because of const-correctness this wasn't done earlier, and the quick fix that worked around the resulting const-correctness problems wasn't sparing with memory and CPU cycles.
So I decided to initialize the convolutionMatrix at class-initialization level with a std::vector<std::vector<double>>, since creating dozens of constructors to make each initialization with std::array<std::array<double, N>, N> possible would be inefficient.
BlurFilter::BlurFilter() : ColorEffect(), convolutionMatrix(
    std::vector<std::vector<double>>{
        std::vector<double>{ 0.0, 0.0, 1.0, 0.0, 0.0 },
        std::vector<double>{ 0.0, 1.0, 1.0, 1.0, 0.0 },
        std::vector<double>{ 1.0, 1.0, 1.0, 1.0, 1.0 },
        std::vector<double>{ 0.0, 1.0, 1.0, 1.0, 0.0 },
        std::vector<double>{ 0.0, 0.0, 1.0, 0.0, 0.0 }
    } )
{
}
However, the application breaks at runtime in the Matrix constructor with a std::out_of_range exception whose what() message is:
what(): vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)
In other words, multidimensionalVector.size() is somehow 0.
Matrix::Matrix( const std::vector<std::vector<double>>& multidimensionalVector )
{
    this->xLength = multidimensionalVector.size();
    this->yLength = ( *multidimensionalVector.begin() ).size();
    this->values = multidimensionalVector;
}
Honestly, I don't understand why the size of the multidimensionalVector is zero at that moment, since I'm passing an initialized vector of vectors which could be, as shown, copy-constructed (or move-constructed) into the values member of the Matrix class. Changing multidimensionalVector to pass-by-value doesn't make a difference.
Could someone explain where and/or what is going wrong here?
(PS: I'd prefer answers written in your own words (i.e. in plain English) rather than quotes straight from the C++ standard documents, because of the vague and confusing academic/scientific language used there.)
FWIW, you can simplify your code quite a bit. Here's an example that works:
#include <iostream>
#include <vector>

struct Matrix
{
    Matrix(const std::vector<std::vector<double>>& multidimensionalVector);

    size_t xLength;
    size_t yLength;
    std::vector<std::vector<double>> values;
};

Matrix::Matrix( const std::vector<std::vector<double>>& multidimensionalVector ) : xLength(0), yLength(0), values(multidimensionalVector)
{
    this->xLength = values.size();
    std::cout << "xLength: " << xLength << std::endl;
    if ( xLength > 0 )
    {
        this->yLength = ( *values.begin() ).size();
    }
    std::cout << "yLength: " << yLength << std::endl;
}
struct BlurFilter
{
    BlurFilter();

    Matrix convolutionMatrix;
};

BlurFilter::BlurFilter() : convolutionMatrix( { { 0.0, 0.0, 1.0, 0.0, 0.0 },
                                                { 0.0, 1.0, 1.0, 1.0, 0.0 },
                                                { 1.0, 1.0, 1.0, 1.0, 1.0 },
                                                { 0.0, 1.0, 1.0, 1.0, 0.0 },
                                                { 0.0, 0.0, 1.0, 0.0, 0.0 } } )
{
}

int main()
{
    BlurFilter f;
}
Well, this is embarrassing.
It seems I fixed my own problem unintentionally while stripping and optimizing my code for posting this question. I didn't think to try out the streamlined code before posting it.
Putting
this->values = multidimensionalVector;
did the job.
The original Matrix constructor, which raised the std::out_of_range exception, was this:
Matrix::Matrix( const std::vector<std::vector<double>>& multidimensionalVector )
{
    this->xLength = multidimensionalVector.size();
    this->yLength = ( *multidimensionalVector.begin() ).size();

    for( int x = 0; x < this->xLength; x++ )
    {
        for( int y = 0; y < this->yLength; y++ )
        {
            this->set( x, y, multidimensionalVector.at( x ).at( y ) );
        }
    }
}
Within Matrix::set( int x, int y, double newValue ), the x and y parameters are always checked to be between -1 and this->xLength and between -1 and this->yLength.
But they are never checked against the bounds of this->values, which at that point was still uninitialized...
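For what it's worth, a minimal sketch of the fix: size this->values before the set() loop, and let set() index the actual storage so out-of-range writes are caught. This is a hypothetical reconstruction of the class for illustration, not the original code.

#include <cstddef>
#include <vector>

class Matrix
{
public:
    // Hypothetical reconstruction for illustration, not the original class.
    Matrix( const std::vector<std::vector<double>>& multidimensionalVector )
    {
        xLength = multidimensionalVector.size();
        yLength = xLength > 0 ? multidimensionalVector.front().size() : 0;

        // Size the storage *before* writing into it via set().
        values.assign( xLength, std::vector<double>( yLength, 0.0 ) );

        for( std::size_t x = 0; x < xLength; x++ )
            for( std::size_t y = 0; y < yLength; y++ )
                set( x, y, multidimensionalVector.at( x ).at( y ) );
    }

    void set( std::size_t x, std::size_t y, double newValue )
    {
        // at() checks against the actual storage, not just xLength/yLength.
        values.at( x ).at( y ) = newValue;
    }

private:
    std::size_t xLength = 0;
    std::size_t yLength = 0;
    std::vector<std::vector<double>> values;
};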
static const double convTable[4][4] =
{
{1.0, 1000.0, 1000000.0, 1000000000,0 },
{0.001, 1.0, 1000.0, 1000000,0 },
{0.000001, 0.001, 1.0, 1000.0 },
{0.000000001, 0.000001, 0.001, 1,0 }
};
I have this array in a header file, but it won't compile. Not sure why?
You are using commas instead of decimal points in some of the items, so you have more than 4 items per row.
{1.0, 1000.0, 1000000.0, 1000000000,0 }
^
should be
{1.0, 1000.0, 1000000.0, 1000000000.0 }
I'm currently using the FileStorage class for storing matrices in XML/YAML using the OpenCV C++ API.
However, I have to write a Python script that reads those XML/YAML files.
I'm looking for an existing OpenCV Python API that can read the XML/YAML files generated by the OpenCV C++ API.
You can use PyYAML to parse the YAML file.
Since PyYAML doesn't understand OpenCV data types, you need to specify a constructor for each OpenCV data type that you are trying to load. For example:
import yaml
import numpy as np

def opencv_matrix(loader, node):
    mapping = loader.construct_mapping(node, deep=True)
    mat = np.array(mapping["data"])
    mat.resize(mapping["rows"], mapping["cols"])
    return mat
yaml.add_constructor(u"tag:yaml.org,2002:opencv-matrix", opencv_matrix)
Once you've done that, loading the yaml file is simple:
with open(file_name) as fin:
    result = yaml.load(fin.read())
The result will be a dict whose keys are the names of whatever you saved in the YAML.
Using the FileStorage functions available in OpenCV 3.2, I've used this with success:
import cv2
fs = cv2.FileStorage("calibration.xml", cv2.FILE_STORAGE_READ)
fn = fs.getNode("Camera_Matrix")
print (fn.mat())
In addition to @misha's response, OpenCV YAMLs are somewhat incompatible with Python.
A few reasons for the incompatibility are:
YAML created by OpenCV doesn't have a space after ":", whereas Python's yaml module requires it. [Ex: it should be a: 2, not a:2, for Python]
The first line of a YAML file created by OpenCV is wrong. Either convert "%YAML:1.0" to "%YAML 1.0", or skip the first line while reading.
The following function takes care of both issues:
import yaml
import re
def readYAMLFile(fileName):
    ret = {}
    skip_lines = 1    # Skip the first line, which says "%YAML:1.0" (or replace it with "%YAML 1.0")
    with open(fileName) as fin:
        for i in range(skip_lines):
            fin.readline()
        yamlFileOut = fin.read()
        myRe = re.compile(r":([^ ])")   # Add a space after ":" if it doesn't exist (Python yaml requirement)
        yamlFileOut = myRe.sub(r': \1', yamlFileOut)
        ret = yaml.load(yamlFileOut)
    return ret
outDict = readYAMLFile("file.yaml")
NOTE: The above response is applicable only for YAMLs. XMLs have their own share of problems, something I haven't explored completely.
I wrote a small snippet to read and write FileStorage-compatible YAMLs in Python:
import yaml
import numpy as np

# A yaml constructor is for loading from a yaml node.
# This is taken from @misha's answer: http://stackoverflow.com/a/15942429
def opencv_matrix_constructor(loader, node):
    mapping = loader.construct_mapping(node, deep=True)
    mat = np.array(mapping["data"])
    mat.resize(mapping["rows"], mapping["cols"])
    return mat
yaml.add_constructor(u"tag:yaml.org,2002:opencv-matrix", opencv_matrix_constructor)

# A yaml representer is for dumping structs into a yaml node.
# So for an opencv_matrix type (to be compatible with c++'s FileStorage) we save the rows, cols, type and flattened data.
def opencv_matrix_representer(dumper, mat):
    mapping = {'rows': mat.shape[0], 'cols': mat.shape[1], 'dt': 'd', 'data': mat.reshape(-1).tolist()}
    return dumper.represent_mapping(u"tag:yaml.org,2002:opencv-matrix", mapping)
yaml.add_representer(np.ndarray, opencv_matrix_representer)

# examples
with open('output.yaml', 'w') as f:
    yaml.dump({"a matrix": np.zeros((10,10)), "another_one": np.zeros((2,4))}, f)

with open('output.yaml', 'r') as f:
    print yaml.load(f)
To improve on the previous answer by @Roy_Shilkrot, I added support for numpy vectors as well as matrices:
import yaml
import numpy as np

# A yaml constructor is for loading from a yaml node.
# This is taken from @misha's answer: http://stackoverflow.com/a/15942429
def opencv_matrix_constructor(loader, node):
    mapping = loader.construct_mapping(node, deep=True)
    mat = np.array(mapping["data"])
    if mapping["cols"] > 1:
        mat.resize(mapping["rows"], mapping["cols"])
    else:
        mat.resize(mapping["rows"], )
    return mat
yaml.add_constructor(u"tag:yaml.org,2002:opencv-matrix", opencv_matrix_constructor)

# A yaml representer is for dumping structs into a yaml node.
# So for an opencv_matrix type (to be compatible with c++'s FileStorage) we save the rows, cols, type and flattened data.
def opencv_matrix_representer(dumper, mat):
    if mat.ndim > 1:
        mapping = {'rows': mat.shape[0], 'cols': mat.shape[1], 'dt': 'd', 'data': mat.reshape(-1).tolist()}
    else:
        mapping = {'rows': mat.shape[0], 'cols': 1, 'dt': 'd', 'data': mat.tolist()}
    return dumper.represent_mapping(u"tag:yaml.org,2002:opencv-matrix", mapping)
yaml.add_representer(np.ndarray, opencv_matrix_representer)
Example:
with open('output.yaml', 'w') as f:
    yaml.dump({"a matrix": np.zeros((10,10)), "another_one": np.zeros((5,))}, f)

with open('output.yaml', 'r') as f:
    print yaml.load(f)
Output:
a matrix: !!opencv-matrix
cols: 10
data: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0]
dt: d
rows: 10
another_one: !!opencv-matrix
cols: 1
data: [0.0, 0.0, 0.0, 0.0, 0.0]
dt: d
rows: 5
Though I could not control the order of rows, cols, dt, data.
pip install opencv-contrib-python for video support; to install a specific version, use pip install opencv-contrib-python==<version>.
I'm refactoring some code that implements a formula and I want to do it test-first, to improve my testing skills, and leave the code covered.
This particular piece of code is a formula that takes 3 parameters and returns a value. I even have some data tables with expected results for different inputs, so in theory I could just write a zillion tests, changing only the input parameters and checking the result against the corresponding expected value.
But I thought there should be a better way to do it, and looking at the docs I've found Value Parameterized Tests.
So, with that I now know how to automatically create the tests for the different inputs.
But how do I get the corresponding expected result to compare it with my calculated one?
The only thing I've been able to come up with is a static lookup table and a static member in the test fixture which is an index into the lookup table and is incremented in each run. Something like this:
#include "gtest/gtest.h"
double MyFormula(double A, double B, double C)
{
return A*B - C*C; // Example. The real one is much more complex
}
class MyTest:public ::testing::TestWithParam<std::tr1::tuple<double, double, double>>
{
protected:
MyTest(){ Index++; }
virtual void SetUp()
{
m_C = std::tr1::get<0>(GetParam());
m_A = std::tr1::get<1>(GetParam());
m_B = std::tr1::get<2>(GetParam());
}
double m_A;
double m_B;
double m_C;
static double ExpectedRes[];
static int Index;
};
int MyTest::Index = -1;
double MyTest::ExpectedRes[] =
{
// C = 1
// B: 1 2 3 4 5 6 7 8 9 10
/*A = 1*/ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0,
/*A = 2*/ 1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 17.0, 19.0,
/*A = 3*/ 2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0, 29.0,
// C = 2
// B: 1 2 3 4 5 6 7 8 9 10
/*A = 1*/ -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0,
/*A = 2*/ -2.0, 0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0,
/*A = 3*/ -1.0, 2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0,
};
TEST_P(MyTest, TestFormula)
{
double res = MyFormula(m_A, m_B, m_C);
ASSERT_EQ(ExpectedRes[Index], res);
}
INSTANTIATE_TEST_CASE_P(TestWithParameters,
MyTest,
testing::Combine( testing::Range(1.0, 3.0), // C
testing::Range(1.0, 4.0), // A
testing::Range(1.0, 11.0) // B
));
Is this a good approach or is there any better way to get the right expected result for each run?
Include the expected result along with the inputs. Instead of a triple of input values, make your test parameter be a 4-tuple.
class MyTest : public ::testing::TestWithParam<
                   std::tr1::tuple<double, double, double, double>>
{ };

TEST_P(MyTest, TestFormula)
{
    double const C = std::tr1::get<0>(GetParam());
    double const A = std::tr1::get<1>(GetParam());
    double const B = std::tr1::get<2>(GetParam());
    double const result = std::tr1::get<3>(GetParam());
    ASSERT_EQ(result, MyFormula(A, B, C));
}
The downside is that you won't be able to keep your test parameters concise with testing::Combine. Instead, you can use testing::Values to define each distinct 4-tuple you wish to test. You might hit the argument-count limit for Values, so you can split your instantiations, such as by putting all the C = 1 cases in one and all the C = 2 cases in another.
INSTANTIATE_TEST_CASE_P(
    TestWithParametersC1, MyTest, testing::Values(
        //            C    A    B   Expected
        make_tuple( 1.0, 1.0, 1.0,  0.0),
        make_tuple( 1.0, 1.0, 2.0,  1.0),
        make_tuple( 1.0, 1.0, 3.0,  2.0),
        // ...
    ));

INSTANTIATE_TEST_CASE_P(
    TestWithParametersC2, MyTest, testing::Values(
        //            C    A    B   Expected
        make_tuple( 2.0, 1.0, 1.0, -3.0),
        make_tuple( 2.0, 1.0, 2.0, -2.0),
        make_tuple( 2.0, 1.0, 3.0, -1.0),
        // ...
    ));
Or you can put all the values in an array separate from your instantiation and then use testing::ValuesIn:
std::tr1::tuple<double, double, double, double> const FormulaTable[] = {
    //            C    A    B   Expected
    make_tuple( 1.0, 1.0, 1.0,  0.0),
    make_tuple( 1.0, 1.0, 2.0,  1.0),
    make_tuple( 1.0, 1.0, 3.0,  2.0),
    // ...
    make_tuple( 2.0, 1.0, 1.0, -3.0),
    make_tuple( 2.0, 1.0, 2.0, -2.0),
    make_tuple( 2.0, 1.0, 3.0, -1.0),
    // ...
};

INSTANTIATE_TEST_CASE_P(
    TestWithParameters, MyTest, ::testing::ValuesIn(FormulaTable));
Hard-coding the expected results like that still limits the number of test cases. If you want a fully data-driven model, I would rather suggest reading the inputs and expected results from a flat file/XML/XLS file.
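A rough sketch of that idea with googletest, building on the TestWithParam setup above. The file name formula_cases.txt, its whitespace-separated "A B C expected" layout, and the use of std::tuple instead of std::tr1::tuple are my own assumptions for illustration, not part of the answer itself:

#include <fstream>
#include <tuple>
#include <vector>
#include "gtest/gtest.h"

double MyFormula(double A, double B, double C)
{
    return A*B - C*C; // example formula from the question
}

// Read "A B C expected" rows from a hypothetical whitespace-separated data file.
static std::vector<std::tuple<double, double, double, double>> LoadCases()
{
    std::vector<std::tuple<double, double, double, double>> cases;
    std::ifstream in("formula_cases.txt");
    double a, b, c, expected;
    while (in >> a >> b >> c >> expected)
        cases.push_back(std::make_tuple(a, b, c, expected));
    return cases;
}

class FileDrivenTest
    : public ::testing::TestWithParam<std::tuple<double, double, double, double>>
{ };

TEST_P(FileDrivenTest, MatchesExpectedValue)
{
    double const a = std::get<0>(GetParam());
    double const b = std::get<1>(GetParam());
    double const c = std::get<2>(GetParam());
    double const expected = std::get<3>(GetParam());
    EXPECT_DOUBLE_EQ(expected, MyFormula(a, b, c));
}

// ValuesIn accepts a container, so the parameter table can come straight from the file.
INSTANTIATE_TEST_CASE_P(FromDataFile, FileDrivenTest,
                        ::testing::ValuesIn(LoadCases()));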
I don't have much experience with unit testing, but as a mathematician, I think there is not a lot more you could do.
If you know some invariants of your formula, you could test for them, but I think that only makes sense in very few scenarios.
As an example, if you wanted to test whether you have correctly implemented the natural exponential function, you could make use of the fact that its derivative should have the same value as the function itself. You could then calculate a numerical approximation to the derivative at a large number of points and see if it is close to the actual function value.
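A minimal sketch of that kind of invariant check, using googletest like the rest of this thread. The sample range, central-difference step size, and tolerance are arbitrary choices of mine, not values from the answer above:

#include <cmath>
#include "gtest/gtest.h"

// For f(x) = exp(x), the derivative f'(x) equals f(x) itself, so a numerical
// derivative of std::exp should stay close to the function value.
TEST(ExpInvariant, DerivativeMatchesFunction)
{
    const double h = 1e-6;   // central-difference step size (arbitrary choice)

    for (double x = -5.0; x <= 5.0; x += 0.001)
    {
        double numericalDerivative = (std::exp(x + h) - std::exp(x - h)) / (2.0 * h);
        double functionValue = std::exp(x);

        // Relative tolerance chosen as a generous bound on truncation plus rounding error.
        EXPECT_NEAR(numericalDerivative, functionValue, 1e-6 * functionValue);
    }
}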