I'm trying to code a multinomial sampling algorithm that applies a binomial distribution to each value of an input vector, conditioned on the values already drawn. The goal is to generate a new population for multiple alleles given the initial population.
To achieve this, I'm using this recursive algorithm:
This is what my code looks like right now:
void RandomNumbers::multinomial(std::vector<unsigned int>& alleleNumbers) {
    /* In this function we need two different records of the size.
     * We need the size from the old populations, ( N - n1 - ... - nA )
     * and we also need the size from the newly created population,
     * ( N - k1 - ... - kA ).
     * In order to achieve such a task, we'll use the integer "temp" to store
     * the value n1 before modifying it to k1 and so on.
     */
    double totalSize = 0;
    for (auto n : alleleNumbers) totalSize += n;
    double newTotalSize(totalSize);
    std::cout << newTotalSize;
    for (size_t i = 0; i < alleleNumbers.size(); ++i) {
        size_t temp = alleleNumbers[i];
        alleleNumbers[i] = binomial(newTotalSize,
                                    alleleNumbers[i] / totalSize);
        newTotalSize -= alleleNumbers[i];   // remaining new draws: N - k1 - ... - ki
        totalSize    -= temp;               // remaining old size:  N - n1 - ... - ni
    }
}
But I'm not sure at all about this, and I was wondering if there was an already existing multinomial algorithm of that kind...
Thank you very much.
You could try using the GNU Scientific Library's gsl_ran_multinomial command.
The function is called as:
gsl_ran_multinomial (const gsl_rng * r, size_t K, unsigned int N, const double p[], unsigned int n[])
where (n_1, n_2, ..., n_K) are nonnegative integers with sum_{k=1}^K n_k = N, and (p_1, p_2, ..., p_K) is a probability distribution with sum(p_i) = 1. If the array p[K] is not normalized then its entries will be treated as weights and normalized appropriately. The arrays n[] and p[] must both be of length K.
The function implements the conditional binomial method from C.S. Davis's "The computer generation of multinomial random variates" (Comp. Stat. Data Anal. 16, 1993), so you could also implement it yourself using that approach. Let me know if you need a copy of the paper.
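As a minimal sketch of how the GSL call could replace the loop in your function (the wrapper name multinomial_resample and the choice to pass the raw counts as weights are mine, not part of GSL or your code):

#include <numeric>
#include <vector>

#include <gsl/gsl_randist.h>
#include <gsl/gsl_rng.h>

// Sketch: resample the allele counts in one call. gsl_ran_multinomial treats
// an unnormalized p[] as weights, so the current counts can be passed directly.
void multinomial_resample(std::vector<unsigned int>& alleleNumbers, gsl_rng* r)
{
    const std::size_t K = alleleNumbers.size();
    const unsigned int N =
        std::accumulate(alleleNumbers.begin(), alleleNumbers.end(), 0u);

    std::vector<double> weights(alleleNumbers.begin(), alleleNumbers.end());
    std::vector<unsigned int> counts(K);

    gsl_ran_multinomial(r, K, N, weights.data(), counts.data());
    alleleNumbers.swap(counts);
}

// Typical setup/teardown (error handling omitted):
//   gsl_rng_env_setup();
//   gsl_rng* r = gsl_rng_alloc(gsl_rng_default);
//   multinomial_resample(alleleNumbers, r);
//   gsl_rng_free(r);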
Let's say I have a 3D array stored as a flattened 1D array, with sizes [N, M, K]. And I want to process a slice of it like [0:N, 1:M, 0:K].
I've created a helper function that addresses the underlying array by indexes from the sliced array (for simplicity I only slice along the second dimension).
#define N somevalue
#define M somevalue
#define K somevalue

// i is an index in the sliced array, so we need to translate it into the original one
template<class T, int FROM>
__device__ __forceinline__ T slice(const T * const __restrict__ x, const size_t i) {
    auto batch_size  = (M - FROM) * K;
    auto batch_index = i / batch_size;
    auto offset_0    = i % batch_size;
    auto offset_1    = offset_0 / K;   // index along the sliced M dimension
    auto offset_2    = offset_0 % K;   // index along K
    return x[batch_index * M * K + (offset_1 + FROM) * K + offset_2];
}
From the NVIDIA profiler I see that the division and modulo operations take a lot of computational power. Also, the sizes are not powers of 2, so I can't use the bit-shift trick directly.
What can you advise?
As far as I know, slicing is a quite common operation in TF, so how did they solve it?
CUDA is about coalesced memory access and SIMD, and arbitrary slices are the exact opposite. So the answer, as usual: it depends.
If your offset is and remains 1, change your memory layout towards M N K. If the ignored entries are really, really sparse, go with the traditional way and just idle a few threads (yes, this hurts, but some threadIdx calculation without modulo might be faster). Otherwise, you will need to compute this bijective mapping from thread/block ID to element ID like you wrote in your question.
There are some ways to express modulo through other operations, but it is usually better to invest the time in improving other parts of the kernel.
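Here is one way the modulo-free threadIdx calculation could look: a 3D launch over the sliced volume, so each thread reads its coordinates straight from blockIdx/threadIdx and a few threads at the edges simply idle. This is only a sketch; the kernel name, the copy it performs, and the runtime N/M/K parameters are mine, not from the question:

// Sketch: index the [N, M - FROM, K] slice with a 3D grid (no '/' or '%').
template <class T, int FROM>
__global__ void process_slice(const T* __restrict__ x, T* __restrict__ out,
                              int N, int M, int K)
{
    const int k = blockIdx.x * blockDim.x + threadIdx.x; // fastest dimension, coalesced
    const int m = blockIdx.y * blockDim.y + threadIdx.y; // position inside the slice
    const int n = blockIdx.z;                            // batch index

    if (k >= K || m >= M - FROM || n >= N) return;       // idle out-of-range threads

    // Read from the original layout, write to a compact sliced layout.
    const T v = x[(size_t(n) * M + (m + FROM)) * K + k];
    out[(size_t(n) * (M - FROM) + m) * K + k] = v;
}

// Launch sketch:
//   dim3 block(128, 4, 1);
//   dim3 grid((K + 127) / 128, (M - FROM + 3) / 4, N);
//   process_slice<float, 1><<<grid, block>>>(d_in, d_out, N, M, K);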
I'm trying to implement an algorithm for generating graphs following Barabási-Albert (BA) model. Under this model, the degree distribution follows a power-law:
P(k) ~ k^-λ
Where the exponent λ should equal 3.
For simplicity, I will focus here on the R code, where I'm using igraph functions. However I get networks with λ != 3. It seems that this has been a topic extensively covered (example question 1, eq2, eq3), but I haven't been able to find a satisfactory solution.
In R I use the igraph::sample_pa function to generate a graph following the BA model. In the reproducible example below, I set the parameters as follows:
# Initialize
set.seed(1234)
order = 100
v_degrees = vector()
for (i in 1:10000) {
  g <- sample_pa(order, power = 3, m = 8)
  # Get degree distribution
  d = degree(g, mode = "all")
  dd = degree_distribution(g, mode = "all", cumulative = FALSE)
  d = 1:max(d)
  probability = dd[-1]
  nonzero.position = which(probability != 0)
  probability = probability[nonzero.position]
  d = d[nonzero.position]
  # Fit power law distribution and get gamma exponent
  reg = lm(log(probability) ~ log(d))
  cozf = coef(reg)
  power.law.fit = function(x) exp(cozf[[1]] + cozf[[2]] * log(x))
  gamma = -cozf[[2]]
  v_degrees[i] = gamma
}
The graph seems to be scale free in fact, giving gamma=0.72±0.21 with order 100 and gamma=0.68±0.24 for order 10,000, and similar results varying the parameter m. But the exponent is clearly different from the expected gamma=3.
In fact I was trying to implement this model in a different language (C++, see code below), but I get similar results, with exponents lower than 3. So I wonder if this is a common misunderstanding about the BA model, or there's something wrong in the previous calculations fitting the power-law distribution, or if, contrary to what is commonly expected, this is the normal behavior of the BA model.
In case someone is interested or is more familiarized with C++, see appendix below.
Appendix: C++ code
For understanding the code below, assume an object class Graph and a connect function that creates an edge between the two vertices passed as arguments. Below I give the code of the two relevant functions, BA_step and build_BA.
BA_step
void Graph::BA_step (int ID, int m, std::vector<double>& freqs) {
    std::vector<int> connect_history;
    vertices.push_back(ID);
    // Connect node ID to a random node i with pi ~ ki / sum kj
    while (connect_history.size() < m) {
        double U (sample_prob()); // gets a value in the range [0,1)
        int index (freqs[freqs.size()-1]);
        for (int i(0); i<freqs.size(); ++i) {
            if (U<=freqs[i]/index && !is_in(connect_history, i)) { // is_in checks if i exists in connect_history
                connect(ID, i);
                connect_history.push_back(i);
                break;
            }
        }
    }
    // Update vector of absolute edge frequencies
    for (int i(0); i<connect_history.size(); ++i) {
        int index (connect_history[i]);
        for (int j(index); j<freqs.size(); ++j) {
            ++freqs[j];
        }
    }
    freqs.push_back(m+freqs[freqs.size()-1]);
}
build_BA
void Graph::build_BA (int m0, int m) {
    // Initialization
    std::vector<double> cum_nedges;
    std::vector<int> connect_history;
    for (int ID(0); ID<m0; ++ID) {
        vertices.push_back(ID);
    }
    // Initial BA step
    vertices.push_back(m0);
    for (int i(0); i<m; ++i) {
        connect(m0, i);
        connect_history.push_back(i);
    }
    cum_nedges.push_back(1);
    for (int i(1); i<m; ++i) cum_nedges.push_back(cum_nedges[cum_nedges.size()-1]+1);
    cum_nedges.push_back(m+m);
    // BA model
    for (int ID(m0+1); ID<order; ++ID) {
        BA_step(ID, m, cum_nedges);
    }
}
Two things might help:
sample_pa arguments to get exponent alpha = 3
They are really power = 1 and m = 1 (check the definition in that Wikipedia article against the igraph::sample_pa documentation; the power argument doesn't mean the exponent of the power-law degree distribution).
Power laws are hard to estimate
Just running OLS/LM on the degree distribution gives you an exponent closer to 0 than 3 (underestimated, in other words). Instead, if you use the igraph::fit_power_law function with a high xmin, you'll get answers closer to 3. Check Aaron Clauset's page and publications for more info on estimating power laws. Really you need to estimate an optimal xmin for every degree distribution.
Here's some code that'll work a bit better:
library(igraph)
set.seed(1234)
order = 10000
v_degrees = vector()
for (i in 1:100) {
  g <- sample_pa(order, power = 1, m = 1)
  d <- degree(g, mode = "all")
  v_degrees[i] <- fit_power_law(d, ceiling(mean(d)) + 100) %>% .$alpha
}
v_degrees %>% summary()
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2.646 2.806 2.864 2.873 2.939 3.120
Note that I make up the x-min to use (ceiling(mean(d))+100). Changing that will change your answers.
I have a series of 100 integer values which I need to reduce/subsample to 77 values for the purpose of fitting into a predefined space on screen. This gives a fraction of 77/100 values-per-pixel - not very neat.
Assuming the 77 is fixed and cannot be changed, what are some typical techniques for subsampling 100 numbers down to 77? I get a sense that it will be a jagged mapping, by which I mean the first new value is the average of [0, 1], then the next value is [3], then the average of [4, 5], etc. But how do I approach getting the pattern for this mapping?
I am working in C++, although I'm more interested in the technique than implementation.
Thanks in advance.
Whether you downsample or oversample, you are trying to reconstruct a signal at non-sampled points in time, so you have to make some assumptions.
The sampling theorem tells you that if you sample a signal knowing that it has no frequency components above half the sampling frequency, you can continuously and completely recover the signal over the whole timing period. There's a way to reconstruct the signal using sinc() functions (this is sin(x)/x).
sinc() (more precisely sin(M_PI*x/Sampling_period)/(M_PI*x/Sampling_period)) is a function that has the following properties:
Its value is 1 for x == 0.0 and 0 for x == k*Sampling_period with k == +-1, +-2, ...
It has no frequency component above half of the sampling frequency derived from Sampling_period.
So if you take F_k(x) = Y[k]*sinc(x/Sampling_period - k), the sinc function that equals the sample value at position k and is 0 at every other sample position, and sum over all k in your sample set, you get the best continuous function that has no components at frequencies above half the sampling frequency and takes the same values as your samples.
That said, you can resample this function at whatever positions you like, which gives the best way to resample your data.
This is, by far, a complicated way of resampling data (it also has the problem of not being causal, so it cannot be implemented in real time), and several methods have been used in the past to simplify the interpolation: you have to construct all the sinc functions, one for each sample point, and add them together. Then you have to resample the resulting function at the new sampling points and give that as the result.
Next is an example of the interpolation method just described. It accepts some input data (in_sz samples) and outputs interpolated data with the method described before (I assume the endpoints coincide, which leads to the somewhat intricate (in_sz - 1)/(out_sz - 1) calculations in the code; change it to in_sz/out_sz if you want a plain N samples -> M samples conversion):
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* normalized sinc function */
double sinc(double x)
{
    x *= M_PI;
    if (x == 0.0) return 1.0;
    return sin(x)/x;
} /* sinc */

/* interpolate a function made of in samples at point x */
double sinc_approx(double in[], size_t in_sz, double x)
{
    int i;
    double res = 0.0;
    for (i = 0; i < in_sz; i++)
        res += in[i] * sinc(x - i);
    return res;
} /* sinc_approx */

/* do the actual resampling. Change (in_sz - 1)/(out_sz - 1) if you
 * don't want the initial and final samples coincide, as is done here.
 */
void resample_sinc(
    double in[],
    size_t in_sz,
    double out[],
    size_t out_sz)
{
    int i;
    double dx = (double) (in_sz-1) / (out_sz-1);
    for (i = 0; i < out_sz; i++)
        out[i] = sinc_approx(in, in_sz, i*dx);
}

/* test case */
int main()
{
    double in[] = {
        0.0, 1.0, 0.5, 0.2, 0.1, 0.0,
    };
    const size_t in_sz = sizeof in / sizeof in[0];
    const size_t out_sz = 5;
    double out[out_sz];
    int i;

    for (i = 0; i < in_sz; i++)
        printf("in[%d] = %.6f\n", i, in[i]);

    resample_sinc(in, in_sz, out, out_sz);

    for (i = 0; i < out_sz; i++)
        printf("out[%.6f] = %.6f\n", (double) i * (in_sz-1)/(out_sz-1), out[i]);

    return EXIT_SUCCESS;
} /* main */
There are different ways of interpolation (see wikipedia)
The linear one would be something like:
std::array<int, 77> sampling(const std::array<int, 100>& a)
{
    std::array<int, 77> res;

    for (int i = 0; i != 76; ++i) {
        int index = i * 99 / 76;
        int p = i * 99 % 76;
        res[i] = ((p * a[index + 1]) + ((76 - p) * a[index])) / 76;
    }
    res[76] = a[99]; // done outside of loop to avoid out of bound access (0 * a[100])
    return res;
}
Live example
Create 77 new pixels based on the weighted average of their positions.
As a toy example, think about the 3 pixel case which you want to subsample to 2.
Original (denote as multidimensional array original with RGB as [0, 1, 2]):
|----|----|----|
Subsample (denote as multidimensional array subsample with RGB as [0, 1, 2]):
|------|------|
Here, it is intuitive to see that the first subsample seems like 2/3 of the first original pixel and 1/3 of the next.
For the first subsample pixel, subsample[0], you make it the RGB average of the m original pixels that overlap, in this case original[0] and original[1]. But we do so in weighted fashion.
subsample[0][0] = original[0][0] * 2/3 + original[1][0] * 1/3 # for red
subsample[0][1] = original[0][1] * 2/3 + original[1][1] * 1/3 # for green
subsample[0][2] = original[0][2] * 2/3 + original[1][2] * 1/3 # for blue
In this example original[1][2] is the green component of the second original pixel.
Keep in mind for different subsampling you'll have to determine the set of original cells that contribute to the subsample, and then normalize to find the relative weights of each.
There are much more complex graphics techniques, but this one is simple and works.
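A minimal sketch of that idea for the 100 -> 77 case in the question (one value per cell instead of RGB; the function name box_resample and the double output type are my own choices, not something from the question):

#include <algorithm>
#include <array>

// Each of the 77 output bins spans 100/77 input cells; every input cell that
// overlaps a bin contributes in proportion to the overlap, then the sum is
// normalized by the bin width.
std::array<double, 77> box_resample(const std::array<int, 100>& in)
{
    constexpr int IN = 100, OUT = 77;
    const double scale = double(IN) / OUT;   // width of one output bin, in input units
    std::array<double, 77> out{};

    for (int o = 0; o < OUT; ++o) {
        const double lo = o * scale, hi = (o + 1) * scale;
        double acc = 0.0;
        for (int i = int(lo); i < IN && i < hi; ++i) {
            const double overlap =
                std::min<double>(hi, i + 1) - std::max<double>(lo, i);
            acc += in[i] * overlap;
        }
        out[o] = acc / scale;                 // the weights sum to the bin width
    }
    return out;
}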
Everything depends on what you wish to do with the data - how do you want to visualize it.
A very simple approach would be to render to a 100-wide image, and then smooth scale the image down to a narrower size. Whatever graphics/development framework you're using will surely support such an operation.
Say, though, that your goal might be to retain certain qualities of the data, such as minima and maxima. In such a case, for each bin, you would draw a line of darker color up to the minimum value and then continue with a lighter color up to the maximum. Or, instead of just putting a pixel at the average value, you could draw a line from the minimum to the maximum.
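A small sketch of that per-bin min/max idea (the helper name and layout are mine, not from the answer): collapse the 100 values into 77 (min, max) pairs, then draw one vertical line per bin.

#include <algorithm>
#include <array>
#include <utility>

// For each of the 77 output bins, keep the minimum and maximum of the input
// values that fall into it, so spikes survive the reduction.
std::array<std::pair<int, int>, 77> min_max_bins(const std::array<int, 100>& in)
{
    std::array<std::pair<int, int>, 77> out;
    for (int o = 0; o < 77; ++o) {
        const int lo = o * 100 / 77;        // first input index of this bin
        const int hi = (o + 1) * 100 / 77;  // one past the last input index
        const auto mm = std::minmax_element(in.begin() + lo, in.begin() + hi);
        out[o] = { *mm.first, *mm.second };
    }
    return out;
}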
Finally, you might wish to render as if you had 77 values only - then the goal is to somehow transform the 100 values down to 77. This will imply some kind of an interpolation. Linear or quadratic interpolation is easy, but adds distortions to the signal. Ideally, you'd probably want to throw a sinc interpolator at the problem. A good list of them can be found here. For theoretical background, look here.
I have a pretty large dataset (it does not fit into GPU memory) containing many vectors, where each vector is several MBs.
I'd like to calculate, using multiple GPU devices, the Gram matrix using a gaussian kernel.
In other words, for every pair of vectors x,y, I need to calculate the norm of x-y. So if I have N vectors, I have (N^2+N)/2 such pairs. I don't care about saving space or time by taking advantage of the symmetry, it can do the whole N^2.
How can I do it with VexCL? I know it's the only library supporting multiple GPUs, and I have pretty much tried doing it with plain OpenCL, with no success so far.
Please note that the dataset won't even fit the machine's RAM, I'm reading blocks of vectors from a memory mapped file.
Thanks much!!
You will obviously need to split your vectors into groups of, say, m, load the groups one by one (or, rather, two by two) onto your GPUs and do the computations. Here is a complete program that does the computation (as I understood it) for the two currently loaded chunks:
#include <vexcl/vexcl.hpp>

int main() {
    const size_t n = 1024; // Each vector size.
    const size_t m = 4;    // Number of vectors in a chunk.

    vex::Context ctx( vex::Filter::Count(1) );

    // The input vectors...
    vex::vector<double> chunk1(ctx, m * n);
    vex::vector<double> chunk2(ctx, m * n);

    // ... with some data.
    chunk1 = vex::element_index();
    chunk2 = vex::element_index();

    vex::vector<double> gram(ctx, m * m); // The current chunk of Gram matrix to fill.

    /*
     * chunk1 and chunk2 both have dimensions [m][n].
     * We want to take each of chunk2 m rows, subtract those from each of
     * chunk1 rows, and reduce the result along the dimension n.
     *
     * In order to do this, we create two virtual 3D matrices (x and y below,
     * those are just expressions and are never instantiated) sized [m][m][n],
     * where
     *
     *     x[i][j][k] = chunk1[i][k] for each j, and
     *     y[i][j][k] = chunk2[j][k] for each i.
     *
     * Then what we need to compute is
     *
     *     gram[i][j] = sum_k( (x[i][j][k] - y[i][j][k])^2 );
     *
     * Here it goes:
     */
    using vex::extents;

    auto x = vex::reshape(chunk1, extents[m][m][n], extents[0][2]);
    auto y = vex::reshape(chunk2, extents[m][m][n], extents[1][2]);

    // The single OpenCL kernel is generated and launched here:
    gram = vex::reduce<vex::SUM>(
        extents[m][m][n],   // The dimensions of the expression to reduce.
        pow(x - y, 2.0),    // The expression to reduce.
        2                   // The dimension to reduce along.
        );

    // Copy the result to host, spread it across your complete gram matrix.
    // I am lazy though, so let's just dump it to std::cout:
    std::cout << gram << std::endl;
}
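If what you want is the Gaussian kernel values themselves rather than the squared distances, a second element-wise pass could be added after the reduction. This assumes VexCL's builtin math functions (exp here) apply element-wise to vector expressions, and sigma is a hypothetical bandwidth parameter, not something from the question:

// Hypothetical follow-up: turn the squared distances into Gaussian kernel
// values, exp(-||x - y||^2 / (2 * sigma^2)), element-wise on the device.
const double sigma = 1.0;                  // kernel bandwidth (assumed)
gram = exp(-gram / (2 * sigma * sigma));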
I suggest you load chunk1 once, then load all chunk2 variants in sequence and do the computations, then load the next chunk1, etc. Note that slicing, reshaping, and multidimensional reduction operations are only supported for a context with a single compute device in it. So what is left is how to spread the computations across all of your compute devices. The easiest way to do this is probably to create a single VexCL context that grabs all available GPUs, and then create vectors of command queues out of it:
vex::Context ctx( vex::Filter::Any );

std::vector<std::vector<vex::command_queue>> q;
for(size_t d = 0; d < ctx.size(); ++d)
    q.push_back({ctx.queue(d)});

//...

// In a std::thread perhaps:
chunk1(q[d], m * n);
chunk2(q[d], m * n);
// ...
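And a rough sketch of the per-device loop hinted at above (the lambda body is just a placeholder; only the one-queue-per-device pattern matters, and m, n, q, ctx are assumed to come from the snippets above):

#include <thread>

std::vector<std::thread> workers;
for (size_t d = 0; d < ctx.size(); ++d) {
    workers.emplace_back([&, d] {
        // Vectors created with q[d] live on a single device, so the
        // reshape/reduce machinery from the program above still applies.
        vex::vector<double> chunk1(q[d], m * n);
        vex::vector<double> chunk2(q[d], m * n);
        vex::vector<double> gram(q[d], m * m);
        // ... load the chunk pairs assigned to device d, run the reduction,
        //     copy the resulting m-by-m block into the full Gram matrix ...
    });
}
for (auto& t : workers) t.join();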
I hope this is enough to get you started.
I updated the code.
What I am trying to do is to hold the values of every Lagrange coefficient in the pointer d (for example, for L1(x), d[0] would be (x-x2)/(x1-x2), d[1] would be (x-x2)/(x1-x2)*(x-x3)/(x1-x3), etc.).
My problems are:
1) how to initialize d (I did d[0]=(z-x[i])/(x[k]-x[i]) but I don't think the "d[0]" is right);
2) how to initialize L_coeff (I am using L_coeff=new double[0] but I'm not sure if that's right).
The exercise is:
Find Lagrange's polynomial approximation for y(x) = cos(π x), x ∈ [-1, 1] using 5 points
(x = -1, -0.5, 0, 0.5, and 1).
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
using namespace std;

const double pi=3.14159265358979323846264338327950288;

// my function
double f(double x){
    return (cos(pi*x));
}

//function to compute lagrange polynomial
double lagrange_polynomial(int N,double *x){
    //N = degree of polynomial
    double z,y;
    double *L_coeff=new double [0];//L_coefficients of every Lagrange L_coefficient
    double *d;//hold the polynomials values for every Lagrange coefficient
    int k,i;
    //computations for finding lagrange polynomial
    //double sum=0;
    for (k=0;k<N+1;k++){
        for ( i=0;i<N+1;i++){
            if (i==0) continue;
            d[0]=(z-x[i])/(x[k]-x[i]);//initialization
            if (i==k) L_coeff[k]=1.0;
            else if (i!=k){
                L_coeff[k]*=d[i];
            }
        }
        cout <<"\nL("<<k<<") = "<<d[i]<<"\t\t\tf(x)= "<<f(x[k])<<endl;
    }
}
int main()
{
    double deg,result;
    double *x;
    cout <<"Give the degree of the polynomial :"<<endl;
    cin >>deg;
    for (int i=0;i<deg+1;i++){
        cout <<"\nGive the points of interpolation : "<<endl;
        cin >> x[i];
    }
    cout <<"\nThe Lagrange L_coefficients are: "<<endl;
    result=lagrange_polynomial(deg,x);
    return 0;
}
Here is an example of a Lagrange polynomial:
As this seems to be homework, I am not going to give you an exhaustive answer, but rather try to send you on the right track.
How do you represent polynomials in computer software? The intuitive form you are trying to achieve, a symbolic expression like 3x^3+5x^2-4, is very impractical for further computations.
The polynomial is fully defined by saving (and outputting) its coefficients.
What you are doing above is hoping that C++ does some algebraic manipulation for you and simplifies your product with a symbolic variable. This is nothing C++ can do without quite a lot of effort.
You have two options:
Either use a proper computer algebra system that can do symbolic manipulations (Maple or Mathematica are some examples)
If you are bound to C++, you have to think a bit more about how the individual coefficients of the polynomial can be computed. Your program's output can only be a list of numbers (which you could, of course, format as a nice-looking string according to a symbolic expression).
Hope this gives you some ideas how to start.
Edit 1
You still have an undefined expression in your code, as you never set any value to y. This leaves prod*=(y-x[i])/(x[k]-x[i]) as an expression that will not return meaningful data. C++ can only work with numbers, and y is not a number for you right now; you think of it as a symbol.
You could evaluate the Lagrange approximation at, say, the value 1 if you set y=1 in your code. This would give you the (as far as I can see right now) correct function value, but no description of the function itself.
Maybe you should take a pen and a piece of paper first and try to write down the expression as precise math. Try to get a real grip on what you want to compute. If you do that, maybe come back here and tell us your thoughts. This should help you understand what is going on.
And always remember: C++ needs numbers, not symbols. Whenever you have a symbol in an expression on your piece of paper that you do not know the value of, you can either find a way to compute its value from the known values, or you have to eliminate the need to compute with this symbol.
P.S.: It is not considered good style to post identical questions in multiple discussion boards at once...
Edit 2
Now you evaluate the function at point y=0.3. This is the way to go if you want to evaluate the polynomial. However, as you stated, you want all coefficients of the polynomial.
Again, I still feel you did not understand the math behind the problem. Maybe I will give you a small example. I am going to use the notation as it is used in the wikipedia article.
Suppose we had k=2 and x = -1, 1. Furthermore, let me just call your cos function f, for simplicity. (The notation will get rather ugly without LaTeX...) Then the Lagrange polynomial is defined as
f(x_0) * l_0(x) + f(x_1)*l_1(x)
where (by doing the simplifications again symbolically)
l_0(x)= (x - x_1)/(x_0 - x_1) = -1/2 * (x-1) = -1/2 *x + 1/2
l_1(x)= (x - x_0)/(x_1 - x_0) = 1/2 * (x+1) = 1/2 * x + 1/2
So, your Lagrange polynomial is
f(x_0) * (-1/2 * x + 1/2) + f(x_1) * (1/2 * x + 1/2)
= 1/2 * (f(x_1) - f(x_0)) * x + 1/2 * (f(x_0) + f(x_1))
So, the coefficients you want to compute would be 1/2 * (f(x_1) - f(x_0)) and 1/2 * (f(x_0) + f(x_1)).
Your task is now to find an algorithm that does the simplification I did, but without using symbols. If you know how to compute the coefficients of the l_j, you are basically done, as you can then just add them up, each multiplied by the corresponding value of f.
So, broken down even further, you have to find a way to multiply the quotients in the l_j with each other on a component-by-component basis. Figure out how this is done and you are nearly done.
Edit 3
Okay, let's get a little bit less vague.
We first want to compute the L_i(x). Those are just products of linear functions. As said before, we have to represent each polynomial as an array of coefficients. For good style, I will use std::vector instead of a plain array. Then, we could define the data structure holding the coefficients of L_1(x) like this:
std::vector<double> L1(5);
// Let's assume our polynomial then has the form
// L1[0] + L1[1]*x^1 + L1[2]*x^2 + L1[3]*x^3 + L1[4]*x^4
Now we want to fill this polynomial with values.
// First we have to start with the polynomial 1 (which is of degree 0)
// Therefore set L1 accordingly:
L1[0] = 1;
L1[1] = 0; L1[2] = 0; L1[3] = 0; L1[4] = 0;
// Of course you could do this more elegantly (using std::vector's constructor, for example)

for (int i = 0; i < N+1; ++i) {
    if (i==0) continue; // For i=0, there will be no polynomial multiplication
    // Otherwise, we have to multiply L1 with the polynomial
    //     (x - x[i]) / (x[0] - x[i])
    // First, note that (x[0] - x[i]) is just a scalar; we will save it:
    double c = (x[0] - x[i]);
    // Now we multiply L1 with (x - x[i]). How does this multiplication change our
    // coefficients? Easy enough: the coefficient of x^1, for example, is just
    // L1[0] - L1[1] * x[i]. Other coefficients are done similarly. Furthermore, we
    // have to divide by c, which leaves the coefficient as
    // (L1[0] - L1[1] * x[i])/c. Let's apply this to the vector:
    L1[4] = (L1[3] - L1[4] * x[i])/c;
    L1[3] = (L1[2] - L1[3] * x[i])/c;
    L1[2] = (L1[1] - L1[2] * x[i])/c;
    L1[1] = (L1[0] - L1[1] * x[i])/c;
    L1[0] = (      - L1[0] * x[i])/c;
    // There we are, polynomial updated.
}
This, of course, has to be done for all L_i. Afterwards, the L_i have to be multiplied by the corresponding function values and added up. That is for you to figure out. (Note that I did quite a lot of inefficient stuff up there, but I hope this helps you understand the details better.)
Hopefully this gives you some idea how you could proceed.
The variable y is actually not a variable in your code but represents the variable of your Lagrange approximation P(y).
Thus, you have to understand the calculations prod*=(y-x[i])/(x[k]-x[i]) and sum+=prod*f not directly but symbolically.
You may get around this by defining your approximation by a series
c[0] * y^0 + c[1] * y^1 + ...
represented by an array c[] within the code. Then you can e.g. implement multiplication
d = c * (y-x[i])/(x[k]-x[i])
coefficient-wise (writing j for the coefficient index) like
d[j] = -c[j]*x[i]/(x[k]-x[i]) + c[j-1]/(x[k]-x[i])
The same way you have to implement addition and assignments on a component basis.
The result will then always be the coefficients of your series representation in the variable y.
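A minimal sketch of that component-wise multiplication (the vector representation and the function name are mine, not from the question): c[j] holds the coefficient of y^j, and multiplying by (y - x[i])/(x[k] - x[i]) produces a series one degree higher.

#include <cstddef>
#include <vector>

// Multiply the series c(y) by (y - xi) / (xk - xi), coefficient by coefficient.
std::vector<double> multiply_by_linear_factor(const std::vector<double>& c,
                                              double xi, double xk)
{
    std::vector<double> d(c.size() + 1, 0.0);   // the degree grows by one
    const double denom = xk - xi;
    for (std::size_t j = 0; j < d.size(); ++j) {
        const double from_lower = (j > 0)        ? c[j - 1]   : 0.0; // y * c[j-1] term
        const double from_same  = (j < c.size()) ? -c[j] * xi : 0.0; // -xi * c[j] term
        d[j] = (from_lower + from_same) / denom;
    }
    return d;
}

// Starting from the series {1.0} and applying this once for every i != k builds
// the coefficients of l_k(y); scaling by f(x[k]) and adding the results
// component-wise gives the full approximation.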
Just a few comments in addition to the existing responses.
The exercise is: Find Lagrange's polynomial approximation for y(x)=cos(π x), x ∈ [-1,1] using 5 points (x = -1, -0.5, 0, 0.5, and 1).
The first thing that your main() does is to ask for the degree of the polynomial. You should not be doing that. The degree of the polynomial is fully specified by the number of control points. In this case you should be constructing the unique fourth-order Lagrange polynomial that passes through the five points (xi, cos(π xi)), where the xi values are those five specified points.
const double pi=3.1415;
This value is not good for a float, let alone a double. You should be using something like const double pi=3.14159265358979323846264338327950288;
Or better yet, don't use pi at all. You should know exactly what the y values are that correspond to the given x values. What are cos(-π), cos(-π/2), cos(0), cos(π/2), and cos(π)?