I am new to the C++ NURBS library. I learned to generate a line (with CLine, from nurbs.h) and save it as .igs. But with multiple control points, how do I generate a curve? Every other tutorial uses graphics.h
(putpixel), but I couldn't find anything about .igs output.
This should be a simple problem, but I have no idea which function can help me here.
Thanks in advance.
We have 4 control points here to begin with.
for (float t = 0.0f; t <= 1.0f; t += 0.2f) {
    // Cubic Bezier in Bernstein form over control points (x[0..3], y[0..3])
    double xt = pow(1 - t, 3) * x[0] + 3 * t * pow(1 - t, 2) * x[1]
              + 3 * pow(t, 2) * (1 - t) * x[2] + pow(t, 3) * x[3];
    double yt = pow(1 - t, 3) * y[0] + 3 * t * pow(1 - t, 2) * y[1]
              + 3 * pow(t, 2) * (1 - t) * y[2] + pow(t, 3) * y[3];
    listt[count][0] = xt;
    listt[count][1] = yt;
    count = count + 1;
    // Push by value: the original pushed &a, the address of a loop-local
    // variable, which leaves dangling pointers in myvector.
    myvector.push_back(Math::Vector4f(xt, yt, 0));
}
......
.....
igs.Write("test.igs");
--- This is how I create the points, but after that I don't know how to use the points to create a Bezier curve.
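I don't know the exact nurbs.h call for writing a Bezier entity directly, so treat the following as a sketch rather than the library's API: if all you can export is line geometry, you can sample the cubic Bezier (the same Bernstein polynomial as in the loop above) into a dense polyline and write each segment as a line entity. `Pt`, `cubicBezier`, and `samplePolyline` are hypothetical names, not from nurbs.h:

```cpp
#include <array>
#include <vector>

struct Pt { double x, y; };  // stand-in for the library's point/vector type

// Evaluate a cubic Bezier at parameter t in [0, 1], Bernstein form:
// B(t) = (1-t)^3 P0 + 3t(1-t)^2 P1 + 3t^2(1-t) P2 + t^3 P3
Pt cubicBezier(double t, const std::array<Pt, 4>& cp) {
    double u = 1.0 - t;
    double b0 = u * u * u;
    double b1 = 3.0 * t * u * u;
    double b2 = 3.0 * t * t * u;
    double b3 = t * t * t;
    return { b0 * cp[0].x + b1 * cp[1].x + b2 * cp[2].x + b3 * cp[3].x,
             b0 * cp[0].y + b1 * cp[1].y + b2 * cp[2].y + b3 * cp[3].y };
}

// Sample the curve into `segments + 1` points; each adjacent pair can then
// be written as a line entity (e.g. via CLine) so the .igs file approximates
// the curve as a polyline.
std::vector<Pt> samplePolyline(const std::array<Pt, 4>& cp, int segments) {
    std::vector<Pt> pts;
    for (int i = 0; i <= segments; ++i)
        pts.push_back(cubicBezier(static_cast<double>(i) / segments, cp));
    return pts;
}
```

With 50 or so segments the polyline is usually indistinguishable from the true curve in a CAD viewer, though a real IGES B-spline entity would of course be the cleaner solution if the library exposes one.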
I am trying to write an argmax function in C++.
For example,
C = wage * hour * shock + t + a * R + (1 - delta) * d - d_f - a_f - fx1 * (1 - delta) * d;
double result;
result = (1 / (1 - sig)) * pow(pow(C, psi) * pow(d, 1 - psi), (1 - sig));
Suppose every variable except 'd_f' and 'a_f' is given, and that 'd_f' and 'a_f' are each some number (double) in [0, 24].
If I want to find the combination of 'd_f' and 'a_f' that maximizes 'result', is there a proper function I can use?
Thanks.
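There is no single standard-library "argmax over two real variables" in C++; for a bounded box like [0, 24] x [0, 24], a plain brute-force grid search is often a reasonable first pass (a finer grid or a proper optimizer can refine the answer). A minimal sketch with hypothetical names, where `f` stands for the function that computes `result` from a candidate (d_f, a_f) pair:

```cpp
struct ArgmaxResult { double d_f; double a_f; double value; };

// Evaluate f(d_f, a_f) on a (steps+1) x (steps+1) grid over [lo, hi]^2
// and keep the best point seen.
template <class F>
ArgmaxResult argmaxGrid(F f, double lo, double hi, int steps) {
    ArgmaxResult best{lo, lo, f(lo, lo)};
    double h = (hi - lo) / steps;
    for (int i = 0; i <= steps; ++i) {
        for (int j = 0; j <= steps; ++j) {
            double d_f = lo + i * h;
            double a_f = lo + j * h;
            double v = f(d_f, a_f);
            if (v > best.value) best = {d_f, a_f, v};
        }
    }
    return best;
}
```

You would call it with a lambda that captures wage, hour, shock, and the other givens, computes C and then result for the candidate pair, and returns result. Note the utility is undefined where C <= 0 with fractional exponents, so the lambda should return something like -1e300 for infeasible pairs.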
My book asks the following question (this is for CompSci 1):
What is wrong with this version of the quadratic formula?
x1 = (-b - sqrt(b * b - 4 * a * c)) / 2 * a;
x2 = (-b + sqrt(b * b - 4 * a * c)) / 2 * a;
The equation your code is translating is:
x = ((-b ± √(b² - 4ac)) / 2) · a
which of course is not the solution for quadratic equations. You want a solution for this equation:
x = (-b ± √(b² - 4ac)) / (2a)
What's the difference? In the first one you compute the numerator, then divide it by two, then multiply by a. That's what your code is doing. In the second one you compute the numerator, then you compute the denominator, and finally you divide one by the other.
So with additional variables:
num1 = -b - sqrt(b * b - 4 * a * c);
num2 = -b + sqrt(b * b - 4 * a * c);
den = 2 * a;
x1 = num1 / den;
x2 = num2 / den;
which can of course be written as:
x1 = (-b - sqrt(b * b - 4 * a * c)) / (2 * a);
x2 = (-b + sqrt(b * b - 4 * a * c)) / (2 * a);
You have to put in those parentheses to force the denominator to be computed before the division happens, as suggested in the comment by #atru.
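To see the precedence issue concretely, here is a small sketch comparing the two spellings (helper names are mine, not from the book). For a = 2, b = -6, c = 4 the true roots are 1 and 2:

```cpp
#include <cmath>

// "/ 2 * a" parses as "(num / 2) * a": divide by 2, then MULTIPLY by a.
double rootWrong(double a, double b, double c, int sign) {
    return (-b + sign * std::sqrt(b * b - 4 * a * c)) / 2 * a;
}

// "/ (2 * a)" divides by the full denominator.
double rootRight(double a, double b, double c, int sign) {
    return (-b + sign * std::sqrt(b * b - 4 * a * c)) / (2 * a);
}
```

With those inputs, rootRight gives 1 and 2, while rootWrong gives 4 and 8 — each answer is off by a factor of a², exactly because the code multiplied by a instead of dividing. (For a = 1 the two versions agree, which is why the bug can hide in simple test cases.)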
I am trying to implement a gradient descent algorithm for simple linear regression. For some reason it doesn't seem to be working.
from __future__ import division
import random
def error(x_i, z_i, theta0, theta1):
    return z_i - theta0 - theta1 * x_i

def squared_error(x_i, z_i, theta0, theta1):
    return error(x_i, z_i, theta0, theta1) ** 2

def mse_fn(x, z, theta0, theta1):
    m = 2 * len(x)
    return sum(squared_error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m

def mse_gradient(x, z, theta0, theta1):
    m = 2 * len(x)
    grad_0 = sum(error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m
    grad_1 = sum(error(x_i, z_i, theta0, theta1) * x_i for x_i, z_i in zip(x, z)) / m
    return grad_0, grad_1

def minimize_batch(x, z, mse_fn, mse_gradient_fn, theta0, theta1, tolerance=0.000001):
    step_size = 0.01
    value = mse_fn(x, z, theta0, theta1)
    while True:
        grad_0, grad_1 = mse_gradient_fn(x, z, theta0, theta1)
        next_theta0 = theta0 - step_size * grad_0
        next_theta1 = theta1 - step_size * grad_1
        next_value = mse_fn(x, z, next_theta0, next_theta1)
        if abs(value - next_value) < tolerance:
            return theta0, theta1
        theta0, theta1, value = next_theta0, next_theta1, next_value

# The data
x = [i + 1 for i in range(40)]
y = [random.randrange(1, 30) for i in range(40)]
z = [2 * x_i + y_i + (y_i / 7) for x_i, y_i in zip(x, y)]
theta0, theta1 = [random.randint(-10, 10) for i in range(2)]
q = minimize_batch(x, z, mse_fn, mse_gradient, theta0, theta1, tolerance=0.000001)
When I run it, I get the following error:
    return error(x_i,z_i,theta0,theta1)**2
OverflowError: (34, 'Result too large')
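Two things are worth checking here. Differentiating (z_i - theta0 - theta1*x_i)**2 gives gradients of -mean(e) and -mean(e*x) — note the minus sign, which the mse_gradient above does not have, so each update moves *up* the error surface and the squared error blows up until it overflows. Also, with x running up to 40, a step size of 0.01 can diverge even with the correct sign, since stability requires the step to be small relative to the feature scale. For comparison, here is a minimal sketch of the conventional update (written in C++, with hypothetical names):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Fit { double theta0; double theta1; };

// Plain batch gradient descent for the model z ~ theta0 + theta1 * x.
// With e_i = z_i - theta0 - theta1 * x_i, the MSE gradients are
// -mean(e_i) and -mean(e_i * x_i). The learning rate lr must be small
// relative to the scale of x, or the iteration diverges.
Fit fitLine(const std::vector<double>& x, const std::vector<double>& z,
            double lr, double tolerance, int maxIter) {
    double theta0 = 0.0, theta1 = 0.0;
    double prevMse = 1e300;
    std::size_t n = x.size();
    for (int iter = 0; iter < maxIter; ++iter) {
        double g0 = 0.0, g1 = 0.0, mse = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double e = z[i] - theta0 - theta1 * x[i];  // residual
            g0 += -e;          // d(mse)/d(theta0)
            g1 += -e * x[i];   // d(mse)/d(theta1)
            mse += e * e;
        }
        g0 /= n;  g1 /= n;  mse /= 2.0 * n;
        theta0 -= lr * g0;   // step downhill
        theta1 -= lr * g1;
        if (std::fabs(prevMse - mse) < tolerance) break;
        prevMse = mse;
    }
    return {theta0, theta1};
}
```

The same two fixes (negate the gradients, shrink the step size or rescale x) should apply directly to the Python version.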
I am working on a small math library for 3D graphics.
I'm not sure which approach costs more CPU/GPU time.
Right now I am using this to multiply 4x4 matrices:
tmpM.p[0][0] = matA.p[0][0] * matB.p[0][0] + matA.p[0][1] * matB.p[1][0] + matA.p[0][2] * matB.p[2][0] + matA.p[0][3] * matB.p[3][0];
tmpM.p[0][1] = matA.p[0][0] * matB.p[0][1] + matA.p[0][1] * matB.p[1][1] + matA.p[0][2] * matB.p[2][1] + matA.p[0][3] * matB.p[3][1];
tmpM.p[0][2] = matA.p[0][0] * matB.p[0][2] + matA.p[0][1] * matB.p[1][2] + matA.p[0][2] * matB.p[2][2] + matA.p[0][3] * matB.p[3][2];
tmpM.p[0][3] = matA.p[0][0] * matB.p[0][3] + matA.p[0][1] * matB.p[1][3] + matA.p[0][2] * matB.p[2][3] + matA.p[0][3] * matB.p[3][3];
tmpM.p[1][0] = matA.p[1][0] * matB.p[0][0] + matA.p[1][1] * matB.p[1][0] + matA.p[1][2] * matB.p[2][0] + matA.p[1][3] * matB.p[3][0];
tmpM.p[1][1] = matA.p[1][0] * matB.p[0][1] + matA.p[1][1] * matB.p[1][1] + matA.p[1][2] * matB.p[2][1] + matA.p[1][3] * matB.p[3][1];
tmpM.p[1][2] = matA.p[1][0] * matB.p[0][2] + matA.p[1][1] * matB.p[1][2] + matA.p[1][2] * matB.p[2][2] + matA.p[1][3] * matB.p[3][2];
tmpM.p[1][3] = matA.p[1][0] * matB.p[0][3] + matA.p[1][1] * matB.p[1][3] + matA.p[1][2] * matB.p[2][3] + matA.p[1][3] * matB.p[3][3];
tmpM.p[2][0] = matA.p[2][0] * matB.p[0][0] + matA.p[2][1] * matB.p[1][0] + matA.p[2][2] * matB.p[2][0] + matA.p[2][3] * matB.p[3][0];
tmpM.p[2][1] = matA.p[2][0] * matB.p[0][1] + matA.p[2][1] * matB.p[1][1] + matA.p[2][2] * matB.p[2][1] + matA.p[2][3] * matB.p[3][1];
tmpM.p[2][2] = matA.p[2][0] * matB.p[0][2] + matA.p[2][1] * matB.p[1][2] + matA.p[2][2] * matB.p[2][2] + matA.p[2][3] * matB.p[3][2];
tmpM.p[2][3] = matA.p[2][0] * matB.p[0][3] + matA.p[2][1] * matB.p[1][3] + matA.p[2][2] * matB.p[2][3] + matA.p[2][3] * matB.p[3][3];
tmpM.p[3][0] = matA.p[3][0] * matB.p[0][0] + matA.p[3][1] * matB.p[1][0] + matA.p[3][2] * matB.p[2][0] + matA.p[3][3] * matB.p[3][0];
tmpM.p[3][1] = matA.p[3][0] * matB.p[0][1] + matA.p[3][1] * matB.p[1][1] + matA.p[3][2] * matB.p[2][1] + matA.p[3][3] * matB.p[3][1];
tmpM.p[3][2] = matA.p[3][0] * matB.p[0][2] + matA.p[3][1] * matB.p[1][2] + matA.p[3][2] * matB.p[2][2] + matA.p[3][3] * matB.p[3][2];
tmpM.p[3][3] = matA.p[3][0] * matB.p[0][3] + matA.p[3][1] * matB.p[1][3] + matA.p[3][2] * matB.p[2][3] + matA.p[3][3] * matB.p[3][3];
Is this a bad/slow idea?
Will it be more efficient to use a loop?
It will mostly depend on what the compiler manages to make of it.
If you can't time the operation in a context similar or identical to its real use (which remains the best way to find out), my guess is that a loop with a functor (function object or lambda) gives the compiler the best chance to produce a cache-friendly unroll and cheap access to the operation.
A half-decent modern compiler will also most likely vectorize it on the CPU.
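For concreteness, here is the loop form of the product above (`Mat4` and `mul` are stand-in names for whatever your library uses). With fixed bounds of 4, the compiler can typically fully unroll and vectorize it, so it usually compiles to much the same code as the sixteen hand-written lines while being far easier to maintain:

```cpp
struct Mat4 { double p[4][4]; };

// tmpM[i][j] = sum over k of matA[i][k] * matB[k][j] -- the same arithmetic
// as the hand-unrolled version, expressed as three fixed-bound loops.
Mat4 mul(const Mat4& matA, const Mat4& matB) {
    Mat4 tmpM{};
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            double sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += matA.p[i][k] * matB.p[k][j];
            tmpM.p[i][j] = sum;
        }
    }
    return tmpM;
}
```

If profiling ever shows this as a hot spot, inspecting the generated assembly (or comparing against the unrolled version under your real compiler flags) will tell you more than any rule of thumb.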