Python process with steadily increasing RAM usage - python-2.7

I have been using Python for some months, and this problem only appeared two or three days ago.
At the moment I am running this code in the IDLE 3.4.3 Python GUI:
x = [1,2,3,4,5]
for i in x:
    x.append((i * (i + 1))/2)
print(x)
But this produces no output or error (and it's not the only snippet that does); the console just sits open, waiting (as if it were thinking).
Then I check the process in the Task Manager and see that it starts at 30-35 MB,
and after one or two minutes it is consuming far more memory.
I have not installed new software or changed anything in the OS, and this happens on two different laptops (W7 and W10). It could be this code, I know (it works if I create a new empty list), but what about other simple instructions like 1+1?
I have tried different IDEs and Python versions, including both architectures.
At first I was using the IPython notebook and Spyder, because I need to plot and Anaconda comes with everything ready, but the kernel always said Busy and gave no output; I restarted, interrupted, and started a new kernel, but nothing worked. This started just this week; I was working fine before, so I had to remove it.
Does anyone have an idea what is happening?

To see what is happening, let me modify your code slightly:
x = [1,2,3,4,5]
for i in x:
    x.append((i * (i + 1))/2)
    print(x)
    if len(x) > 20:
        break
print(x)
The output looks like this:
[1, 2, 3, 4, 5, 1.0]
[1, 2, 3, 4, 5, 1.0, 3.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0, 21.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0, 21.0, 231.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0, 21.0, 231.0, 1540.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0, 21.0, 231.0, 1540.0, 7260.0]
[1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0, 1.0, 6.0, 21.0, 55.0, 120.0, 1.0, 21.0, 231.0, 1540.0, 7260.0, 1.0]
append simply adds a new value to the end of a list, so what you've done is create an infinite loop in which x just keeps growing. Every time the iterator draws a number from x, a new number is added to the end of x, so the iterator always has five more numbers to draw before it completes.
As the iterator keeps running, x keeps getting bigger and bigger and consumes more and more memory.
To fix this infinite loop you can either store the results in a different list (see the sketch below) or use a break condition like I did above, if you really want the results appended to x. Also, if you mean to replace the values in x, you can do something like this:
x = [1,2,3,4,5]
for i, v in enumerate(x):
    x[i] = (v * (v + 1))/2
print(x)
which outputs:
[1.0, 3.0, 6.0, 10.0, 15.0]
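And if you do want the new values appended to x, a minimal sketch of the "different list" option mentioned above: build the results in a separate list, then extend x once the loop is done.
x = [1, 2, 3, 4, 5]
results = []
for i in x:                 # x is never modified while we iterate over it
    results.append((i * (i + 1))/2)
x.extend(results)           # safe: the loop has already finished
print(x)                    # [1, 2, 3, 4, 5, 1.0, 3.0, 6.0, 10.0, 15.0]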
Hope that helps.

Related

Numpy divide doesn't return floats

When I run:
np.divide(np.array([0, 1, 2, 3, 4]),np.array([2, 2, 4, 4, 4]))
OR
np.array([0, 1, 2, 3, 4])/np.array([2, 2, 4, 4, 4])
OR
np.true_divide(np.array([0, 1, 2, 3, 4]),np.array([2, 2, 4, 4, 4]))
The output I get:
array([0., 0., 0., 1., 1.])
Even when the numbers are specified as floats like [0.0, 1.0, 2.0, 3.0, 4.0], the result is the same.
Expected output:
array([0., 0.5, 0.5, 0.75, 1.0])
I am unable to understand why the result is the way it is.
The print precision had been set to precision=0.
Took me a while to figure that out!
It got fixed when I did:
np.set_printoptions(precision=2)
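To confirm that the stored values were always correct and only their display was rounded, here is a small demonstration (np.true_divide used so it behaves the same on Python 2 and 3):
import numpy as np

a = np.true_divide(np.array([0, 1, 2, 3, 4]), np.array([2, 2, 4, 4, 4]))

np.set_printoptions(precision=0)
print(a)             # [0. 0. 0. 1. 1.]   <- only the display is rounded
print(a.tolist())    # [0.0, 0.5, 0.5, 0.75, 1.0]  <- the stored values are fine

np.set_printoptions(precision=2)
print(a)             # [0.   0.5  0.5  0.75 1.  ]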

Why does pyrr.Matrix44 translation appear to be column-major, and rotation row-major?

Consider the following:
>>> Matrix44.from_translation(np.array([1,2,3]))
Matrix44([[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [1, 2, 3, 1]])
>>> Matrix44.from_x_rotation(0.5 * np.pi)
Matrix44([[ 1.0,  0.0,  0.0,  0.0],
          [ 0.0,  0.0, -1.0,  0.0],
          [ 0.0,  1.0,  0.0,  0.0],
          [ 0.0,  0.0,  0.0,  1.0]])
The translation matrix suggests that the layout of the matrix is column-major, but the rotation matrix, confusingly, suggests that it is row-major, given that the standard right-handed 3x3 rotation matrix around X in row-major notation reads:
1.0   0.0      0.0
0.0   cos(a)  -sin(a)
0.0   sin(a)   cos(a)
which is exactly the result returned by from_x_rotation.
Does anyone know if this is a bug, or am I misinterpreting something?
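To make the comparison concrete, here is a quick probe (plain numpy matmul; I'm assuming Matrix44 converts cleanly to an ndarray, since pyrr is built on numpy):
import numpy as np
from pyrr import Matrix44

M = np.array(Matrix44.from_translation(np.array([1.0, 2.0, 3.0])))
p = np.array([0.0, 0.0, 0.0, 1.0])   # homogeneous point at the origin

print(p @ M)   # row-vector product    -> [1. 2. 3. 1.] : translation applies
print(M @ p)   # column-vector product -> [0. 0. 0. 1.] : it does not
With this layout only the row-vector product moves the origin, which suggests the matrices are intended to multiply row vectors from the left.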

OpenGL - Evaluators and Normals

I'm trying to use evaluators to create a plane:
void Plane::draw(float texS, float texT)
{
    float div = v.at(0);
    GLfloat ctrlpoints[4][3] = {
        {-0.5, 0.0, 0.5}, {-0.5, 0.0, -0.5},
        {0.5, 0.0, 0.5}, {0.5, 0.0, -0.5}};
    GLfloat texturepoints[4][2] = {
        {0.0, 0.0}, {0.0, 1.0/texT},
        {1.0/texS, 0.0}, {1.0/texS, 1.0/texT}};
    glMap2f(GL_MAP2_VERTEX_3, 0.0, 1.0, 3, 2, 0.0, 1.0, 2 * 3, 2, &ctrlpoints[0][0]);
    glMap2f(GL_MAP2_TEXTURE_COORD_2, 0.0, 1.0, 2, 2, 0.0, 1.0, 2 * 2, 2, &texturepoints[0][0]);
    glEnable(GL_MAP2_VERTEX_3);
    glEnable(GL_MAP2_TEXTURE_COORD_2);
    glEnable(GL_AUTO_NORMAL);
    glMapGrid2f(div, 0.0, 1.0, div, 0.0, 1.0);
    glEvalMesh2(GL_FILL, 0, div, 0, div);
}
It displays the plane correctly, it gives me a 50*50 grid, for example, and the texture I apply to it is also displayed properly. However, if I try to apply a golden appearance to it, it just gives me a dull brown color.
I know I can get what I want by creating a rectangle with a quad or triangle strip, but the point here is to use evaluators.
One answer I found said that evaluators calculate normals automatically when GL_AUTO_NORMAL is enabled, and that that was the only necessary instruction. But even then, the author of that question couldn't do what he wanted.
And I do have GL_NORMALIZE enabled in the initialization.

OpenGL calculating Normal of a custom shape

I have a shape with the following vertices and faces:
static Vec3f cubeVerts[24] = {
    { -0.5, 0.5, -0.5 },  /* backside */
    { -0.5, -0.5, -0.5 },
    { -0.3, 4.0, -0.5 },
    { -0.3, 3.0, -0.5 },
    { -0.1, 5.5, -0.5 },
    { -0.1, 4.5, -0.5 },
    { 0.1, 5.5, -0.5 },
    { 0.1, 4.5, -0.5 },
    { 0.3, 4.0, -0.5 },
    { 0.3, 3.0, -0.5 },
    { 0.5, 0.5, -0.5 },
    { 0.5, -0.5, -0.5 },
    { -0.5, 0.5, 0.5 },   /* frontside */
    { -0.5, -0.5, 0.5 },
    { -0.3, 4.0, 0.5 },
    { -0.3, 3.0, 0.5 },
    { -0.1, 5.5, 0.5 },
    { -0.1, 4.5, 0.5 },
    { 0.1, 5.5, 0.5 },
    { 0.1, 4.5, 0.5 },
    { 0.3, 4.0, 0.5 },
    { 0.3, 3.0, 0.5 },
    { 0.5, 0.5, 0.5 },
    { 0.5, -0.5, 0.5 }
};
static GLuint cubeFaces[] = {
    0, 1, 3, 2,       /* backfaces */
    2, 3, 5, 4,
    4, 5, 7, 6,
    6, 7, 9, 8,
    8, 9, 11, 10,
    12, 13, 15, 14,   /* frontfaces */
    14, 15, 17, 16,
    16, 17, 19, 18,
    18, 19, 21, 20,
    20, 21, 23, 22,
    0, 2, 14, 12,     /* topfaces */
    2, 4, 16, 14,
    4, 6, 18, 16,
    6, 8, 20, 18,
    8, 10, 22, 20,
    1, 3, 15, 13,     /* bottomfaces */
    3, 5, 17, 15,
    5, 7, 19, 17,
    7, 9, 21, 19,
    9, 11, 23, 21,
    0, 1, 13, 12,     /* sidefaces */
    10, 11, 23, 22
};
and I want to get its normals like this:
static Vec3f cubeNorms[] = {
    { 0, 1, 0 },
    { 0, 1, 0 },
    { 0, 1, 0 },
    { 0, 1, 0 }
};
Can someone tell me how to calculate its normals and put them inside an array, so I can use them all together like this? I know something is wrong with my normals, because the lighting on my shape is not right, and I am also not sure if this is the right way of setting up the normals. Just one example is fine; I've been reading heaps of normal calculations and still can't figure out how to do it.
static void drawCube()
{
    // vertexes
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, cubeVerts);
    // norms
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, cubeNorms);
    // faces
    glDrawElements(GL_QUADS, 22 * 4, GL_UNSIGNED_INT, cubeFaces);
}
I'm going to assume your faces are counter-clockwise front-facing - I don't know if that's the case - and the quads are, of course, convex and planar.
For a face, take vertices {0, 1, 2}. I don't know the Vec3f specification (or if it's a class or C struct), but we can find the normal for all vertices in the quad with:
Vec3f va = v0 - v1; // quad vertex 1 -> 0
Vec3f vb = v2 - v1; // quad vertex 1 -> 2
Vec3f norm = cross(vb, va); // cross product.
float norm_len = sqrt(dot(norm, norm));
norm /= norm_len; // divide each component of norm by norm_len.
That gives you a unit normal for that face. If vertices are shared, and you want to give the model the perception of curvature using lighting, you'll have to decide what value of the normal should be 'agreed' upon. Perhaps the best starting point is to simply take an average of the face normals at that vertex - and rescale the result to unit length as required.
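For concreteness, here is the same construction and the averaging in a minimal numpy sketch (two hypothetical quads sharing an edge; the vertex data is illustrative, not the shape above):
import numpy as np

# Two quads sharing the edge between vertices 1 and 2.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],   # quad 0, in the z = 0 plane
                  [1, 0, 1], [1, 1, 1]], dtype=float)           # quad 1, in the x = 1 plane
faces = np.array([[0, 1, 2, 3], [2, 5, 4, 1]])

# Unit face normals via the same cross product as above: cross(v2 - v1, v0 - v1).
v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
fnorms = np.cross(v2 - v1, v0 - v1)
fnorms /= np.linalg.norm(fnorms, axis=1, keepdims=True)

# Vertex normals: sum the normals of every face touching a vertex, then renormalize.
vnorms = np.zeros_like(verts)
for face, n in zip(faces, fnorms):
    vnorms[face] += n
vnorms /= np.linalg.norm(vnorms, axis=1, keepdims=True)
print(vnorms)   # vertices 1 and 2 get the averaged "smoothed" normal (0.707, 0, 0.707)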

How to test multi-parameter formula

I'm refactoring some code that implements a formula and I want to do it test-first, to improve my testing skills, and leave the code covered.
This particular piece of code is a formula that takes 3 parameters and returns a value. I even have some data tables with expected results for different inputs, so in theory I could just type a zillion tests, changing the input parameters and checking the results against the corresponding expected value.
But I thought there should be a better way to do it, and looking at the docs I've found Value Parameterized Tests.
So, with that I now know how to automatically create the tests for the different inputs.
But how do I get the corresponding expected result to compare it with my calculated one?
The only thing I've been able to come up with is a static lookup table and a static member in the test fixture which is an index into the lookup table and is incremented on each run. Something like this:
#include "gtest/gtest.h"

double MyFormula(double A, double B, double C)
{
    return A*B - C*C; // Example. The real one is much more complex
}

class MyTest : public ::testing::TestWithParam<std::tr1::tuple<double, double, double>>
{
protected:
    MyTest() { Index++; }
    virtual void SetUp()
    {
        m_C = std::tr1::get<0>(GetParam());
        m_A = std::tr1::get<1>(GetParam());
        m_B = std::tr1::get<2>(GetParam());
    }
    double m_A;
    double m_B;
    double m_C;
    static double ExpectedRes[];
    static int Index;
};
int MyTest::Index = -1;
double MyTest::ExpectedRes[] =
{
    //            C = 1
    // B:       1     2     3     4     5     6     7     8     9    10
    /*A = 1*/ 0.0,  1.0,  2.0,  3.0,  4.0,  5.0,  6.0,  7.0,  8.0,  9.0,
    /*A = 2*/ 1.0,  3.0,  5.0,  7.0,  9.0, 11.0, 13.0, 15.0, 17.0, 19.0,
    /*A = 3*/ 2.0,  5.0,  8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0, 29.0,
    //            C = 2
    // B:       1     2     3     4     5     6     7     8     9    10
    /*A = 1*/ -3.0, -2.0, -1.0,  0.0,  1.0,  2.0,  3.0,  4.0,  5.0,  6.0,
    /*A = 2*/ -2.0,  0.0,  2.0,  4.0,  6.0,  8.0, 10.0, 12.0, 14.0, 16.0,
    /*A = 3*/ -1.0,  2.0,  5.0,  8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0,
};
TEST_P(MyTest, TestFormula)
{
    double res = MyFormula(m_A, m_B, m_C);
    ASSERT_EQ(ExpectedRes[Index], res);
}

INSTANTIATE_TEST_CASE_P(TestWithParameters,
                        MyTest,
                        testing::Combine(testing::Range(1.0, 3.0),   // C
                                         testing::Range(1.0, 4.0),   // A
                                         testing::Range(1.0, 11.0)   // B
                        ));
Is this a good approach or is there any better way to get the right expected result for each run?
Include the expected result along with the inputs. Instead of a triple of input values, make your test parameter be a 4-tuple.
class MyTest : public ::testing::TestWithParam<
    std::tr1::tuple<double, double, double, double>>
{ };
TEST_P(MyTest, TestFormula)
{
    double const C = std::tr1::get<0>(GetParam());
    double const A = std::tr1::get<1>(GetParam());
    double const B = std::tr1::get<2>(GetParam());
    double const result = std::tr1::get<3>(GetParam());
    ASSERT_EQ(result, MyFormula(A, B, C));
}
The downside is that you won't be able to keep your test parameters concise with testing::Combine. Instead, you can use testing::Values to define each distinct 4-tuple you wish to test. You might hit the argument-count limit for Values, so you can split your instantiations, such as by putting all the C = 1 cases in one and all the C = 2 cases in another.
using std::tr1::make_tuple;  // bring make_tuple into scope for the tables below

INSTANTIATE_TEST_CASE_P(
    TestWithParametersC1, MyTest, testing::Values(
        //          C    A    B    expected
        make_tuple(1.0, 1.0, 1.0,  0.0),
        make_tuple(1.0, 1.0, 2.0,  1.0),
        make_tuple(1.0, 1.0, 3.0,  2.0),
        // ...
    ));
INSTANTIATE_TEST_CASE_P(
    TestWithParametersC2, MyTest, testing::Values(
        //          C    A    B    expected
        make_tuple(2.0, 1.0, 1.0, -3.0),
        make_tuple(2.0, 1.0, 2.0, -2.0),
        make_tuple(2.0, 1.0, 3.0, -1.0),
        // ...
    ));
Or you can put all the values in an array separate from your instantiation and then use testing::ValuesIn:
std::tr1::tuple<double, double, double, double> const FormulaTable[] = {
    //          C    A    B    expected
    make_tuple(1.0, 1.0, 1.0,  0.0),
    make_tuple(1.0, 1.0, 2.0,  1.0),
    make_tuple(1.0, 1.0, 3.0,  2.0),
    // ...
    make_tuple(2.0, 1.0, 1.0, -3.0),
    make_tuple(2.0, 1.0, 2.0, -2.0),
    make_tuple(2.0, 1.0, 3.0, -1.0),
    // ...
};

INSTANTIATE_TEST_CASE_P(
    TestWithParameters, MyTest, ::testing::ValuesIn(FormulaTable));
Hard-coding the expected results limits the number of test cases again. If you want a completely data-driven model, I would rather suggest reading the inputs and expected results from a flat file/XML/XLS file, as sketched below.
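The same idea sketched in Python for brevity (the CSV content below is a hypothetical stand-in for a real data file; my_formula mirrors the MyFormula example above):
import csv, io

# Hypothetical stand-in for a flat file: each row is C, A, B, expected.
table = io.StringIO("""1,1,1,0
1,1,2,1
2,1,1,-3
""")

def my_formula(a, b, c):
    return a * b - c * c

for row in csv.reader(table):
    c, a, b, expected = map(float, row)
    assert my_formula(a, b, c) == expected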
I don't have much experience with unit testing, but as a mathematician, I think there is not a lot more you could do.
If you knew some invariants of your formula, you could test for them, but I think that only makes sense in very few scenarios.
As an example, if you wanted to test whether you have correctly implemented the natural exponential function, you could make use of the knowledge that its derivative should have the same value as the function itself. You could then calculate a numerical approximation of the derivative at a million points and see whether they are close to the actual function values.
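A minimal sketch of that idea (np.exp stands in for the hypothetical implementation under test):
import numpy as np

def my_exp(x):   # hypothetical implementation under test
    return np.exp(x)

xs = np.linspace(-5.0, 5.0, 1000000)
h = 1e-6
deriv = (my_exp(xs + h) - my_exp(xs - h)) / (2 * h)   # central difference
assert np.allclose(deriv, my_exp(xs), rtol=1e-4)      # derivative ~= function value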