I am creating a VST (virtual instrument) program in C++, and I have an array of structs that represent various parameters in my program:
const FloatParam_Properties FloatParamProps[NUM_FLOAT_PARAMS] =
{
//Frequency
{"BaseFreq", "Base Freq", 0.0, 20.0, 5.0, 0.6}, //0
{"FreqDelta", "Freq Delta", -20.0, 20.0, 0.0, 0.6}, //1
...
//Wave
{"OscSelect", "Wave form", 0.0, 3.0, 0.0, 1.0}, //9
//Master
{"Volume", "Volume", 0.0, 1.0, 0.1, 0.4}, //10
};
Each element in this array is a struct. When I want to access these structs, I have just hard-coded the indices (e.g. doing FloatParamProps[0] to access the base frequency). However, I would like to use the C++ preprocessor to give these hard-coded indices a name, since everything here is known at compile time, and it would be good if I didn't have to hard-code those defines either.
What I would like to do is something like:
const FloatParam_Properties FloatParamProps[NUM_FLOAT_PARAMS] =
{
//Frequency
DEF_FLOAT_PARAM(BaseFreq, "Base Freq", 0.0, 20.0, 5.0, 0.6), //0
DEF_FLOAT_PARAM(FreqDelta, "Freq Delta", -20.0, 20.0, 0.0, 0.6), //1
...
//Wave
DEF_FLOAT_PARAM(OscSelect, "Wave form", 0.0, 3.0, 0.0, 1.0), //9
//Master
DEF_FLOAT_PARAM(Volume, "Volume", 0.0, 1.0, 0.1, 0.4), //10
};
Where DEF_FLOAT_PARAM would be a macro that takes the first argument and turns it into a preprocessor define (or maybe a constexpr) by using the __COUNTER__ macro. Then, if I wanted to access the first one, I could do FloatParamProps[BaseFreq], for example. The issue I'm having is that you can't have a #define within a macro, so I can't define BaseFreq as a constant.
I also tried doing something like
#define DEF_FLOAT_PARAM(ID, Name, minVal, maxVal, defaultVal, skewFactor) P_##ID __COUNTER__ \
{#ID, Name, minVal, maxVal, defaultVal, skewFactor},
and the plan here was to take the define outside of the macro and just type it manually, like so:
const FloatParam_Properties FloatParamProps[NUM_FLOAT_PARAMS] =
{
//Frequency
#define DEF_FLOAT_PARAM(BaseFreq, "Base Freq", 0.0, 20.0, 5.0, 0.6), //0
...
But the issue with that is that the preprocessor doesn't want to expand the macro when it's in front of the define. If only I could tell it to expand the macro, it would work.
Does anyone know how I could do this?
Thanks.
Use x-macros:
#define PARAMS(X) \
X(BaseFreq, "Base Freq", 0.0, 20.0, 5.0, 0.6) \
X(FreqDelta, "Freq Delta", -20.0, 20.0, 0.0, 0.6) \
X(OscSelect, "Wave form", 0.0, 3.0, 0.0, 1.0)
const FloatParam_Properties FloatParamProps[NUM_FLOAT_PARAMS] =
{
#define PARAM_STRUCT(id, ...) {#id, __VA_ARGS__},
PARAMS(PARAM_STRUCT)
#undef PARAM_STRUCT
};
enum Params
{
#define PARAM_ENUM(id, ...) id,
PARAMS(PARAM_ENUM)
#undef PARAM_ENUM
_count,
};
This will generate the same array you already have, and an enum with constants matching the array indices.
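For example, the generated constants can then replace the hard-coded indices. A minimal usage sketch (the static_assert assumes C++11, and that the full PARAMS list contains all NUM_FLOAT_PARAMS entries):
// BaseFreq, FreqDelta, OscSelect, ... are now named enum constants
// whose values match the array indices:
const FloatParam_Properties& base = FloatParamProps[BaseFreq];  // instead of FloatParamProps[0]
const FloatParam_Properties& wave = FloatParamProps[OscSelect]; // instead of a hard-coded index

// _count equals the number of X(...) entries, so it can replace a
// hand-maintained NUM_FLOAT_PARAMS, or at least sanity-check it:
static_assert(_count == NUM_FLOAT_PARAMS, "parameter table size mismatch");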
I want to do smooth transitions between different colors (rather than just toggling them) by pressing the keyboard key 't'.
Below is my code, which toggles the colors all at once, but I want a smooth transition of color:
case 't':
// code for color transition
changeColor += 1;
if(changeColor>8) //Toggling between 9 different colors
changeColor=0;
break;
Color storing code:
GLfloat diffColors[9][4] = { {0.3, 0.8, 0.9, 1.0},
{1, 0, 0, 1},
{0, 1, 0, 1},
{0, 0, 1, 1},
{0.5, 0.5, 0.9, 1},
{0.2, 0.5, 0.5, 1},
{0.5, 0.5, 0.9, 1},
{0.9, 0.5, 0.5, 1},
{0.3, 0.8, 0.5, 1} };
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, diffColors[changeColor]);
Change the changeColor parameter to float, and instead of incrementing by 1, add some small value like 0.1 or smaller, depending on how quickly you want to change the colors and how often your event fires.
case 't':
// code for color transition
changeColor += 0.025;
break;
Use linear interpolation to compute the color based on parameter changeColor.
//---------------------------------------------------------------------------
GLfloat diffColors[9][4] =
{
{0.3, 0.8, 0.9, 1.0},
{1.0, 0.0, 0.0, 1.0},
{0.0, 1.0, 0.0, 1.0},
{0.0, 0.0, 1.0, 1.0},
{0.5, 0.5, 0.9, 1.0},
{0.2, 0.5, 0.5, 1.0},
{0.5, 0.5, 0.9, 1.0},
{0.9, 0.5, 0.5, 1.0},
{0.3, 0.8, 0.5, 1.0}
};
GLfloat changeColor=0.0; // This must be float !!!
//---------------------------------------------------------------------------
void set_color()
{
int i;
const int N=9; // number of colors in your table
float *c0,*c1,c[4],t;
// wrap the parameter into the range <0..N) so floor(t) is a valid index
t=changeColor; // renamed so I do not need to write too much
while (t>=N) t-=N;
while (t<0.0) t+=N;
i=floor(t); // needs <math.h>
changeColor=t; // update parameter
t-=i; // leave just the fractional part
// get neighboring colors to t
c0=diffColors[i]; i++; if (i>=N) i=0;
c1=diffColors[i];
// interpolate
for (i=0;i<4;i++) c[i]=c0[i]+((c1[i]-c0[i])*t);
//glColor4fv(c);
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, c);
}
//---------------------------------------------------------------------------
So the idea is to split changeColor into its integer and fractional parts. The integer part tells you between which 2 colors in your table you are, and the fractional part <0..1> tells you how far you are from the one color towards the other.
Linear interpolation of a value x between two values x0, x1 with parameter t = <0..1> looks like this:
x = x0 + (x1-x0)*t
If you look at the code above, it does the same for c, c0, c1, t. For this to work, the first chunk of code (the one starting with case 't': that adds to the parameter) must be executed repeatedly, for example from a timer, and it must also invoke rendering. If it sits in some on-key handler that is called only once per key hit (no autorepeat), then it will not work, and you need to move the addition into a timer, or into the redraw event if you are continuously redrawing the screen. If not even that is happening, you need to change the architecture of your app.
So this is how I solved it.
case 't':
// code for color transition
changeColor=8; // I am doing the color transition on the 9th color
if(initialValue>=1.0)
initialValue=0.1;
initialValue+=0.01;
break;
Color storing code:
GLfloat diffColors[9][4] = { {initialValue, 0.5, 0.9, 1.0},
{initialValue, 1.0, 0.0, 0.0},
{initialValue, 0.0, 1.0, 0.0},
{initialValue, 0.8, 0.5, 0.8},
{initialValue, 0.5, 0.5, 0.9},
{initialValue, 0.9, 0.9, 0.5},
{initialValue, 0.5, 0.7, 0.9},
{initialValue, 0.9, 0.5, 0.5},
{initialValue, 0.7, 0.3, 0.5}};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, diffColors[changeColor]);
I'm getting an error when I try to pass a matrix into a proc. I'm pretty sure I'm doing something very wrong, but I can't figure it out.
use LinearAlgebra;
proc main() {
var A = Matrix(
[0.0, 0.8, 1.1, 0.0, 2.0]
,[0.8, 0.0, 1.3, 1.0, 0.0]
,[1.1, 1.3, 0.0, 0.5, 1.7]
,[0.0, 1.0, 0.5, 0.0, 1.5]
,[2.0, 0.0, 1.7, 1.5, 0.0]
);
check_dims(A);
}
proc check_dims(A: Matrix) {
var t: bool = false;
if (A.domain.dim(1) == A.domain.dim(2)){
t = true;
}
return t;
}
Gives me
mad.chpl:3: In function 'main':
mad.chpl:14: error: unresolved call 'check_dims([domain(2,int(64),false)] real(64))'
mad.chpl:17: note: candidates are: check_dims(A: Matrix)
I'm using chpl Version 1.15.0
Linear algebra objects (like matrices and vectors) are represented as arrays in Chapel. Therefore, changing Matrix (a type that does not exist) to [] (the syntax for an array type) should work as expected:
use LinearAlgebra;
proc main() {
var A = Matrix(
[0.0, 0.8, 1.1, 0.0, 2.0]
,[0.8, 0.0, 1.3, 1.0, 0.0]
,[1.1, 1.3, 0.0, 0.5, 1.7]
,[0.0, 1.0, 0.5, 0.0, 1.5]
,[2.0, 0.0, 1.7, 1.5, 0.0]
);
check_dims(A);
}
proc check_dims(A: []) {
var t: bool = false;
// method is dim()
if (A.domain.dim(1) == A.domain.dim(2)){
t = true;
}
return t;
}
Ok, so I have a model class that contains a pointer to (what will be) an array of point3 objects:
point3* _vertices_colors;
point3 is the following typedef:
typedef GLfloat point3[3];
Essentially, this makes an array of point3 objects an array of arrays. Then, in a derived class's constructor, I allocate memory for the number of vertices and colors I want to store, as follows:
_vertices_colors = new point3[16];
This means my object has 8 vertices, each with its own color stored. I then define the following array on the stack, ready to copy to the pointer:
point3 verticesColors[] = {
{1.0, 1.0, 1.0}, {1.0, 0.0, 0.0},
{-1.0, 1.0, 1.0}, {1.0, 0.0, 0.0},
{-1.0, -1.0, 1.0},{1.0, 0.0, 0.0},
{1.0, -1.0, 1.0},{1.0, 0.0, 0.0},
{1.0, 1.0, -1.0}, {1.0, 0.0, 0.0},
{-1.0, 1.0, -1.0}, {1.0, 0.0, 0.0},
{-1.0, -1.0, -1.0},{1.0, 0.0, 0.0},
{1.0, -1.0, -1.0},{1.0, 0.0, 0.0}
};
Then, I use a for loop to copy to the array on the heap:
for(int i = 0; i < 16; i++)
{
*_vertices_colors[i,0] = *verticesColors[i, 0];
*_vertices_colors[i,1] = *verticesColors[i, 1];
*_vertices_colors[i,2] = *verticesColors[i, 2];
printf("%15f", *_vertices_colors[i,0]);
printf("\t");
printf("%15f", *_vertices_colors[i,1]);
printf("\t");
printf("%15f", *_vertices_colors[i,2]);
printf("\n");
}
However, this appears to assign 1.0, 1.0, -1.0 to each of the 16 rows of the array. I've tried other ways of assigning the pointer to the array, for example the line:
_vertices_colors = verticesColors;
As verticesColors is a constant pointer to an array, I thought this would work; however, it produces the same results. I also tried using memcpy:
memcpy(_vertices_colors, verticesColors, sizeof(_vertices_colors));
But this seems to produce some uncontrollable results: it assigns 1.0 to each of the first columns and very large negative numbers to the rest. Can anyone see why my first method doesn't work?
This
*_vertices_colors[i,0] = *verticesColors[i, 0];
*_vertices_colors[i,1] = *verticesColors[i, 1];
*_vertices_colors[i,2] = *verticesColors[i, 2];
is equivalent to
*_vertices_colors[0] = *verticesColors[0];
*_vertices_colors[1] = *verticesColors[1];
*_vertices_colors[2] = *verticesColors[2];
You are using the comma operator , in the array subscript, which yields only the last value of the sequence; in this case 0, 1, and 2.
Multi-dimensional arrays are accessed as
_vertices_colors[i][0] = verticesColors[i][0];
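As a sketch, the corrected copy (assuming the declarations from the question) can then be written either with the loop or with memcpy. Note that the memcpy attempt in the question fails for a different reason: sizeof(_vertices_colors) is the size of the pointer, not of the array:
// corrected loop: index both dimensions with [i][j]
for (int i = 0; i < 16; i++)
    for (int j = 0; j < 3; j++)
        _vertices_colors[i][j] = verticesColors[i][j];

// memcpy also works, but the size must describe the data, not the pointer:
memcpy(_vertices_colors, verticesColors, 16 * sizeof(point3));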
I'm refactoring some code that implements a formula and I want to do it test-first, to improve my testing skills, and leave the code covered.
This particular piece of code is a formula that takes 3 parameters and returns a value. I even have some data tables with expected results for different inputs, so in theory I could just type a zillion tests, just changing the input parameters and checking the results against the corresponding expected value.
But I thought there should be a better way to do it, and looking at the docs I've found Value Parameterized Tests.
So, with that I now know how to automatically create the tests for the different inputs.
But how do I get the corresponding expected result to compare it with my calculated one?
The only thing I've been able to come up with is a static lookup table and a static member in the test fixture, which is an index into the lookup table and is incremented on each run. Something like this:
#include "gtest/gtest.h"
double MyFormula(double A, double B, double C)
{
return A*B - C*C; // Example. The real one is much more complex
}
class MyTest:public ::testing::TestWithParam<std::tr1::tuple<double, double, double>>
{
protected:
MyTest(){ Index++; }
virtual void SetUp()
{
m_C = std::tr1::get<0>(GetParam());
m_A = std::tr1::get<1>(GetParam());
m_B = std::tr1::get<2>(GetParam());
}
double m_A;
double m_B;
double m_C;
static double ExpectedRes[];
static int Index;
};
int MyTest::Index = -1;
double MyTest::ExpectedRes[] =
{
// C = 1
// B: 1 2 3 4 5 6 7 8 9 10
/*A = 1*/ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0,
/*A = 2*/ 1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 17.0, 19.0,
/*A = 3*/ 2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0, 29.0,
// C = 2
// B: 1 2 3 4 5 6 7 8 9 10
/*A = 1*/ -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0,
/*A = 2*/ -2.0, 0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0,
/*A = 3*/ -1.0, 2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0,
};
TEST_P(MyTest, TestFormula)
{
double res = MyFormula(m_A, m_B, m_C);
ASSERT_EQ(ExpectedRes[Index], res);
}
INSTANTIATE_TEST_CASE_P(TestWithParameters,
MyTest,
testing::Combine( testing::Range(1.0, 3.0), // C
testing::Range(1.0, 4.0), // A
testing::Range(1.0, 11.0) // B
));
Is this a good approach or is there any better way to get the right expected result for each run?
Include the expected result along with the inputs. Instead of a triple of input values, make your test parameter be a 4-tuple.
class MyTest: public ::testing::TestWithParam<
std::tr1::tuple<double, double, double, double>>
{ };
TEST_P(MyTest, TestFormula)
{
double const C = std::tr1::get<0>(GetParam());
double const A = std::tr1::get<1>(GetParam());
double const B = std::tr1::get<2>(GetParam());
double const result = std::tr1::get<3>(GetParam());
ASSERT_EQ(result, MyFormula(A, B, C));
}
The downside is that you won't be able to keep your test parameters concise with testing::Combine. Instead, you can use testing::Values to define each distinct 4-tuple you wish to test. You might hit the argument-count limit for Values, so you can split your instantiations, such as by putting all the C = 1 cases in one and all the C = 2 cases in another.
INSTANTIATE_TEST_CASE_P(
TestWithParametersC1, MyTest, testing::Values(
// C    A    B    expected
make_tuple( 1.0, 1.0, 1.0, 0.0),
make_tuple( 1.0, 1.0, 2.0, 1.0),
make_tuple( 1.0, 1.0, 3.0, 2.0),
// ...
));
INSTANTIATE_TEST_CASE_P(
TestWithParametersC2, MyTest, testing::Values(
// C    A    B    expected
make_tuple( 2.0, 1.0, 1.0, -3.0),
make_tuple( 2.0, 1.0, 2.0, -2.0),
make_tuple( 2.0, 1.0, 3.0, -1.0),
// ...
));
Or you can put all the values in an array separate from your instantiation and then use testing::ValuesIn:
std::tr1::tuple<double, double, double, double> const FormulaTable[] = {
// C    A    B    expected
make_tuple( 1.0, 1.0, 1.0, 0.0),
make_tuple( 1.0, 1.0, 2.0, 1.0),
make_tuple( 1.0, 1.0, 3.0, 2.0),
// ...
make_tuple( 2.0, 1.0, 1.0, -3.0),
make_tuple( 2.0, 1.0, 2.0, -2.0),
make_tuple( 2.0, 1.0, 3.0, -1.0),
// ...
};
INSTANTIATE_TEST_CASE_P(
TestWithParameters, MyTest, ::testing::ValuesIn(FormulaTable));
Hard-coding the expected results like this limits the number of test cases again. If you want a completely data-driven model, I would rather suggest reading the inputs and expected results from a flat file / XML / XLS file.
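A minimal sketch of that idea, sticking with the std::tr1 types from the question (the file name formula_cases.csv and the comma-separated line format C,A,B,expected are assumptions):
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// reads one test case per line: "C,A,B,expected"
std::vector<std::tr1::tuple<double, double, double, double>> ReadFormulaTable()
{
    std::vector<std::tr1::tuple<double, double, double, double>> rows;
    std::ifstream in("formula_cases.csv"); // assumed file name
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ss(line);
        double c, a, b, expected;
        char sep;
        if (ss >> c >> sep >> a >> sep >> b >> sep >> expected)
            rows.push_back(std::tr1::make_tuple(c, a, b, expected));
    }
    return rows;
}

INSTANTIATE_TEST_CASE_P(
    TestWithFileParameters, MyTest, ::testing::ValuesIn(ReadFormulaTable()));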
I don't have much experience with unit testing, but as a mathematician, I think there is not a lot more you could do.
If you know some invariants of your formula, you could test for them, but I think that only makes sense in a few scenarios.
As an example, if you wanted to test whether you have correctly implemented the natural exponential function, you could use the fact that its derivative has the same value as the function itself. You could then calculate a numerical approximation of the derivative at a million points and check that it is close to the actual function value.
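A sketch of that idea in gtest (the sample range, the step size h, and the tolerance are arbitrary choices, and far fewer than a million points are used here):
#include <cmath>
#include "gtest/gtest.h"

TEST(ExpInvariant, DerivativeEqualsFunction)
{
    const double h = 1e-6; // step for the central-difference approximation
    for (double x = -5.0; x <= 5.0; x += 0.01)
    {
        // numerical derivative: (f(x+h) - f(x-h)) / (2h)
        const double derivative = (std::exp(x + h) - std::exp(x - h)) / (2.0 * h);
        // for exp, the derivative should equal the function value itself
        EXPECT_NEAR(std::exp(x), derivative, 1e-4 * std::exp(x));
    }
}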