Cubic spline / curve fitting - computer-vision

I need to determine the parameters of an illumination change, which is defined by the continuous piecewise polynomial C(t), where f(t) is a cubic curve defined by the two boundary points (t1, c) and (t2, 0), with f'(t1) = 0 and f'(t2) = 0.
Original Paper: Texture-Consistent Shadow Removal
The intensity curve is sampled along the normal at the shadow boundary. Each row is one sample, displaying the illumination change, so X is the column index and Y is the pixel intensity.
I have my real data like this (one sample averaged over all samples):
In total I have N samples, and I need to determine the parameters (c, t1, t2).
How can I do it?
I tried to do it by solving a linear system in Matlab:
avr_curve is the average curve, obtained by averaging over all samples.
f(x)= x^3+a2*x^2+a1*x+a0 is the cubic function
%t1,t2 selected by hand
t1= 10;
t2= 15;
offset=10;
avr_curve= [41, 40, 40, 41, 41, 42, 42, 43, 43, 43, 51, 76, 98, 104, 104, 103, 104, 105, 105, 107, 105];
%gradx= convn(avr_curve,[-1 1],'same');
A= zeros(2*offset+1,3);
%b= zeros(2*offset+1,1);
b= avr_curve';
%for i= 1:2*offset+1
for i=t1:t2
    i
    x= i-offset-1
    A(i,1)= x^2; %a2
    A(i,2)= x; %a1
    A(i,3)= 1; %a0
    b(i,1)= b(i,1)-x^3;
end
u= A\b;
figure,plot(avr_curve(t1:t2))
%estimated cubic curve
for i= 1:2*offset+1
    x= i-offset-1;
    fx(i)= x^3+u(1)*x^2+u(2)*x+u(3);
end
figure,plot(fx(t1:t2))
[plot: the part of avr_curve on [t1, t2]]
[plot: the cubic curve that I got (it doesn't look like avr_curve)]
So what am I doing wrong?
UPDATE:
It seems my error was due to modelling the cubic polynomial with 3 variables:
f(x)= x^3+a2*x^2+a1*x+a0 - 3 variables
When I use 4 variables instead, everything works:
f(x)= a3*x^3+a2*x^2+a1*x+a0 - 4 variables
Here is the code in Matlab:
%defined by hand
t1= 10;
t2= 14;
avr_curve= [41, 40, 40, 41, 41, 42, 42, 43, 43, 43, 51, 76, 98, 104, 104, 103, 104, 105, 105, 107, 105];
x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21];
%x= [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; %real x axis
%%%model 1
%%f(x)= x^3+a2*x^2+a1*x+a0 - 3 variables
%A= zeros(4,3);
%b= [43 104]';
%%cubic equation at t1
%A(1,1)= t1^2; %a2
%A(1,2)= t1; %a1
%A(1,3)= 1; %a0
%b(1,1)= b(1,1)-t1^3;
%%cubic equation at t2
%A(2,1)= t2^2; %a2
%A(2,2)= t2; %a1
%A(2,3)= 1; %a0
%b(2,1)= b(2,1)-t2^3;
%%1st derivative at t1
%A(3,1)= 2*t1; %a2
%A(3,2)= 1; %a1
%A(3,3)= 0; %a0
%b(3,1)= -3*t1^2;
%%1st derivative at t2
%A(4,1)= 2*t2; %a2
%A(4,2)= 1; %a1
%A(4,3)= 0; %a0
%b(4,1)= -3*t2^2;
%model 2
%f(x)= a3*x^3+a2*x^2+a1*x+a0 - 4 variables
A= zeros(4,4);
b= [43 104]';
%cubic equation at t1
A(1,1)= t1^3; %a3
A(1,2)= t1^2; %a2
A(1,3)= t1; %a1
A(1,4)= 1; %a0
b(1,1)= b(1,1);
%cubic equation at t2
A(2,1)= t2^3; %a3
A(2,2)= t2^2; %a2
A(2,3)= t2; %a1
A(2,4)= 1; %a0
b(2,1)= b(2,1);
%1st derivative at t1
A(3,1)= 3*t1^2; %a3
A(3,2)= 2*t1; %a2
A(3,3)= 1; %a1
A(3,4)= 0; %a0
b(3,1)= 0;
%1st derivative at t2
A(4,1)= 3*t2^2; %a3
A(4,2)= 2*t2; %a2
A(4,3)= 1; %a1
A(4,4)= 0; %a0
b(4,1)= 0;
size(A)
size(b)
u= A\b;
u
%estimated cubic curve
%dx=[1:21]; % global view
dx=t1-1:t2+1; % local view in [t1 t2]
for x= dx
    %fx(x)= x^3+u(1)*x^2+u(2)*x+u(3); % model 1
    fx(x)= u(1)*x^3+u(2)*x^2+u(3)*x+u(4); % model 2
end
err= 0;
for x= dx
    err= err+(fx(x)-avr_curve(x))^2;
end
err
figure,plot(dx,avr_curve(dx),dx,fx(dx))
[plot: the fitted spline on the interval [t1-1, t2+1]]
[plot: the same fit on the full interval]

Disclaimer
I cannot give any guarantees on the correctness of the code or methods given below; always use your critical sense before using any of it.
0. Define the problem
You have this piecewise defined function:

Cl(t) = sigma for t <= t1
Cl(t) = f(t) for t1 < t < t2
Cl(t) = 0 for t >= t2

where f(t) is a cubic function. In order to identify it uniquely, the following additional conditions are given:

f(t1) = sigma, f(t2) = 0, f'(t1) = 0, f'(t2) = 0

You want to recover the best values of the parameters t1, t2 and sigma, i.e. the values that minimize the error against a given set of points. This is essentially curve fitting in the least squares sense.
1. Parametrize the f(t) cubic function
In order to compute the error between a candidate Cl(t) function and the set of points, we need to compute f(t). Its general form (being a cubic) is

f(t) = a*t^3 + b*t^2 + c*t + d

So it seems that we have four additional parameters to consider. In fact, these parameters are completely determined by the three free parameters t1, t2 and sigma. It is important not to confuse the free parameters with the dependent ones.
Given the additional conditions on f(t), we can set up this linear system:

a*t1^3 + b*t1^2 + c*t1 + d = sigma
a*t2^3 + b*t2^2 + c*t2 + d = 0
3*a*t1^2 + 2*b*t1 + c = 0
3*a*t2^2 + 2*b*t2 + c = 0

which has exactly one solution (as expected), given by

K = (t1 - t2)^3
a = -2*sigma/K
b = 3*sigma*(t1 + t2)/K
c = -6*sigma*t1*t2/K
d = sigma*t2^2*(3*t1 - t2)/K

This tells us how to compute the parameters of the cubic from the three free parameters.
This way Cl(t) is completely determined, now it's time to find the best candidate.
2. Minimize the error
I would normally go for least squares at this point. Since Cl(t) is not linear in its parameters, there is no closed form for computing sigma, t1 and t2. There are, however, numerical methods, like Gauss-Newton. One way or another, they require the partial derivatives with respect to the three parameters, and I don't know how to compute the derivative with respect to a separation parameter like t1. I searched Math.SE and found a question that addresses the same problem, but nobody answered it. Without the partial derivatives, the least squares methods are off the table. So I took a more practical road and implemented a brute-force function in C that tries every possible triplet of parameters and returns the best match.
3. The brute force function
Given the nature of the problem, this turned out to be O(n^2) in the number of samples.
The algorithm proceeds as follows: divide the sample set into three parts: the points before t1, the points between t1 and t2, and the points after t2.
Only the first part is used to compute sigma: sigma is simply the arithmetic average of the points in part 1.
t1 and t2 are computed through a cycle: t1 is set to every possible point in the original point set, starting from the second and going forward. For every choice of t1, t2 is set to every possible point after t1.
At each iteration an error is computed, and if it is the minimum seen so far, the parameters used are saved.
The error is computed as the sum of absolute residuals, since the absolute value should be fast (surely faster than squaring) and it fits the purpose of a metric.
4. The code
#include <stdio.h>
#include <math.h>

float point_on_curve(float sigma, float t1, float t2, float t)
{
    float a, b, c, d, K;
    if (t <= t1)
        return sigma;
    if (t >= t2)
        return 0;
    K = (t1-t2)*(t1-t2)*(t1-t2);
    a = -2*sigma/K;
    b = 3*sigma*(t1+t2)/K;
    c = -6*sigma*t1*t2/K;
    d = sigma*t2*t2*(3*t1-t2)/K;
    return a*t*t*t + b*t*t + c*t + d;
}

float compute_error(float sigma, float t1, float t2, int s, int dx, int* data, unsigned int len)
{
    float error = 0;
    unsigned int i;
    for (i = 0; i < len; i++)
        error += fabs(point_on_curve(sigma, t1, t2, s+i*dx) - data[i]);
    return error;
}

/*
 * s is the starting time of the sample set
 * dx is the separation in time between two samples (a.k.a. the sampling period)
 * data is the array of samples
 * len is the number of samples
 * sigma, t1, t2 are pointers to the output parameters computed
 *
 * return 1 if there are not enough (3) samples, 0 if everything went ok.
 */
int curve_fit(int s, int dx, int* data, unsigned int len, float* sigma, float* t1, float* t2)
{
    float l_sigma = 0;
    float l_t1, l_t2;
    float sum = 0;
    float min_error = 0, cur_error;
    char error_valid = 0;
    unsigned int i, j;
    if (len < 3)
        return 1;
    for (i = 0; i < len; i++)
    {
        /* Compute sigma as the average of the points up to index i */
        sum += data[i];
        l_sigma = sum/(i+1);
        /* Set t1 as the point i+1 */
        l_t1 = s+(i+1)*dx;
        for (j = i+2; j < len; j++)
        {
            /* Set t2 as the points i+2, i+3, i+4, ... */
            l_t2 = s+j*dx;
            /* Compute the error */
            cur_error = compute_error(l_sigma, l_t1, l_t2, s, dx, data, len);
            /* Check error_valid first so min_error is never read uninitialized */
            if (!error_valid || cur_error < min_error)
            {
                error_valid = 1;
                min_error = cur_error;
                *sigma = l_sigma;
                *t1 = l_t1;
                *t2 = l_t2;
            }
        }
    }
    return 0;
}

int main()
{
    float sigma, t1, t2;
    int data[] = {41, 40, 40, 41, 41, 42, 42, 43, 43, 43, 51, 76, 98, 104, 104, 103, 104, 105, 105, 107, 105};
    unsigned int len = sizeof(data)/sizeof(int);
    unsigned int i;
    for (i = 0; i < len; i++)
        data[i] -= 107; /* Subtract the max */
    if (curve_fit(1, 1, data, len, &sigma, &t1, &t2))
        printf("Not enough data!\n");
    else
        printf("Parameters: sigma = %.3f, t1 = %.3f, t2 = %.3f\n", sigma, t1, t2);
    return 0;
}
Please note that Cl(t) was defined as having 0 as its right limit, so the code assumes this is the case.
This is why the max value (107) is subtracted from every sample: I worked with the definition of Cl(t) given at the beginning and only later noticed that the samples were biased.
I am not going to adapt the code for now; you can easily add another parameter to the problem and redo the steps from 1 if needed, or simply translate the samples using the maximum value.
The output of the code is
Parameters: sigma = -65.556, t1 = 10.000, t2 = 14.000
which matches the given point set, considering that it is vertically translated by -107 (i.e. the plateau c = sigma + 107 ≈ 41.4 in the original coordinates).

Related

How to draw a B-spline curve using this math algorithm

I have to use this formula in order to draw a 3rd-degree B-spline curve.
Can someone give me advice on what I am doing wrong in my code? It doesn't seem to work properly for me, and I am getting weird results when trying to draw the curve.
segment is a vector of QPoint; it has x and y.
void MyWindow::calculateCurve() {
    QPoint result;
    int m = segment.size();
    int from = m-3;
    int to = m-2;
    for (double t = 0.0; t <= to; t += 0.001) {
        result = (pow(-t, 3)+3*pow(t,2)+1)/6*(segment[segment.size()-3]) +
                 (3*pow(t,3)-6*pow(t,2)+4)/6*(segment[segment.size()-2]) +
                 (pow(-3*t,3)+3*pow(t,2)+3*t+1)/6*(segment[segment.size()-1]) +
                 (pow(t,3)/6)*(segment[segment.size()]);
        draw(result.x(), result.y());
    }
}
Most often we define a common range for the parameter t (i.e. for the whole curve, not for each segment separately). We can e.g. assume that t ∈ [0, m - 2]. Then, for the segment Q3 the parameter t varies from t3 = 0 to t4 = 1, for segment Q4 from t4 = 1 to t5 = 2, and for the last segment Qm from tm = m - 3 to tm+1 = m - 2 (a sketch of this mapping follows the corrected code below).
You have written pow(-3*t,3), which means (-3t)³, but you should have written -3*pow(t,3), that is, -3(t³):
result = (pow(-t, 3)+3*pow(t,2)+1)/6*(segment[segment.size()-3]) +
         (3*pow(t,3)-6*pow(t,2)+4)/6*(segment[segment.size()-2]) +
         (-3*pow(t,3)+3*pow(t,2)+3*t+1)/6*(segment[segment.size()-1]) +
         (pow(t,3)/6)*(segment[segment.size()]);
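Beyond the pow fix, two things in the original expression look suspicious: segment[segment.size()] indexes one past the end of the vector, and the first basis term appears to be missing a -3t (the uniform cubic B-spline basis starts with (1-t)^3/6). For reference, here is a hedged, self-contained sketch of a uniform cubic B-spline evaluator; the conventions (n control points, global t in [0, n-3]) and the names Pt and evalBSpline are mine, not from the question:

#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Hedged sketch (my own conventions, not the question's code): evaluate a
// uniform cubic B-spline with n >= 4 control points pts at a global
// parameter t in [0, n-3].
Pt evalBSpline(const std::vector<Pt>& pts, double t)
{
    int n = (int)pts.size();
    int seg = std::min((int)t, n - 4);   // segment index: 0 .. n-4
    double u = t - seg;                  // local parameter in [0, 1]
    // uniform cubic B-spline basis (note the -3u term in b0)
    double b0 = (-u*u*u + 3*u*u - 3*u + 1) / 6.0;
    double b1 = (3*u*u*u - 6*u*u + 4) / 6.0;
    double b2 = (-3*u*u*u + 3*u*u + 3*u + 1) / 6.0;
    double b3 = u*u*u / 6.0;
    return { b0*pts[seg].x + b1*pts[seg+1].x + b2*pts[seg+2].x + b3*pts[seg+3].x,
             b0*pts[seg].y + b1*pts[seg+1].y + b2*pts[seg+2].y + b3*pts[seg+3].y };
}

Stepping t from 0 to n - 3 in small increments traces the whole curve, matching the common parameter range described above.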

Grid nearest neighbour BFS slow

I'm trying to upsample my image. I fill the upsampled version with the corresponding pixels in this way.
pseudocode:
upsampled.getPixel((int)(x * factorX), (int)(y * factorY)) = old.getPixel(x, y)
As a result I end up with a bitmap that is not completely filled, and I try to fill each unfilled pixel with its nearest filled neighbour.
I use this method for the NN search and call it for each unfilled pixel. I do not flag an unfilled pixel as filled after changing its value, as that may create some weird patterns. The problem is that it works, but very slowly. Execution time on my i7 9700k for a 2500 x 3000 image scaled by factors x = 1.5 and y = 1.5 is about 10 seconds.
template<typename T>
std::pair<int, int> cn::Utils::nearestNeighbour(const Bitmap<T> &bitmap, const std::pair<int, int> &point, int channel, const bool *filledArr) {
    auto belongs = [](const cn::Bitmap<T> &bitmap, const std::pair<int, int> &point){
        return point.first >= 0 && point.first < bitmap.w && point.second >= 0 && point.second < bitmap.h;
    };
    if (!(belongs(bitmap, point))) {
        throw std::out_of_range("This point does not belong to bitmap!");
    }
    auto hash = [](std::pair<int, int> const &pair){
        std::size_t h1 = std::hash<int>()(pair.first);
        std::size_t h2 = std::hash<int>()(pair.second);
        return h1 ^ h2;
    };
    //from where, point
    std::queue<std::pair<int, int>> queue;
    queue.push(point);
    std::unordered_set<std::pair<int, int>, decltype(hash)> visited(10, hash);
    while (!queue.empty()) {
        auto p = queue.front();
        queue.pop();
        visited.insert(p);
        if (belongs(bitmap, p)) {
            if (filledArr[bitmap.getDataIndex(p.first, p.second, channel)]) {
                return {p.first, p.second};
            }
            std::vector<std::pair<int,int>> neighbors(4);
            neighbors[0] = {p.first - 1, p.second};
            neighbors[1] = {p.first + 1, p.second};
            neighbors[2] = {p.first, p.second - 1};
            neighbors[3] = {p.first, p.second + 1};
            for (auto n : neighbors) {
                if (visited.find(n) == visited.end()) {
                    queue.push(n);
                }
            }
        }
    }
    return std::pair<int, int>({-1, -1});
}
The bitmap.getDataIndex() works in O(1) time. Here's its implementation:
template<typename T>
int cn::Bitmap<T>::getDataIndex(int col, int row, int depth) const {
    if (col >= this->w or col < 0 or row >= this->h or row < 0 or depth >= this->d or depth < 0) {
        throw std::invalid_argument("cell does not belong to bitmap!");
    }
    return depth * w * h + row * w + col;
}
I have spent a while debugging this but could not really find what makes it so slow.
Theoretically, when scaling by factors x = 1.5 and y = 1.5, a filled pixel should be no further than 2 pixels from an unfilled one, so a well-implemented BFS shouldn't take long.
Also, I use the following encoding for the bitmap; example for a 3x3x3 image:
* (each row and channel is in ascending order)
* {00, 01, 02}, | {09, 10, 11}, | {18, 19, 20},
c0 {03, 04, 05}, c1{12, 13, 14}, c2{21, 22, 23},
* {06, 07, 08}, | {15, 16, 17}, | {24, 25, 26},
the filled pixel should be no further than 2 pixels from unfilled one, so well implemented BFS wouldn't take long.
Sure, doing it once won't take long. But you need to do this for almost every pixel in the output image, and doing something cheap a huge number of times still takes long.
Instead of searching for a set pixel, use the information you have about the earlier computation to directly find the values you are looking for.
For example, in your output image, a set pixel is at ((int)(x * factorX), (int)(y * factorY)) for integer x and y. So for a non-set pixel (a, b), you can find the nearest set pixel as ((int)(round(a/factorX)*factorX), (int)(round(b/factorY)*factorY)).
However, you are much better off directly upsampling the image in a simpler way: don’t loop over the input pixels, instead loop over the output pixels, and find the corresponding input pixel.
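To illustrate that last suggestion, here is a minimal sketch that loops over the output pixels and samples the corresponding input pixel directly; the single-channel Image struct and all names are my own assumptions, not the asker's Bitmap class:

#include <algorithm>
#include <cmath>
#include <vector>

// Single-channel image as a flat row-major vector (assumed for illustration).
struct Image {
    int w, h;
    std::vector<float> px;               // w*h values
    float at(int x, int y) const { return px[(size_t)y*w + x]; }
};

// Loop over *output* pixels and fetch the nearest input pixel, so every
// output pixel is filled in a single pass.
Image upsampleNearest(const Image& in, double factorX, double factorY) {
    Image out;
    out.w = (int)(in.w * factorX);
    out.h = (int)(in.h * factorY);
    out.px.resize((size_t)out.w * out.h);
    for (int y = 0; y < out.h; y++) {
        for (int x = 0; x < out.w; x++) {
            // nearest input pixel, clamped to the valid range
            int sx = std::min((int)std::round(x / factorX), in.w - 1);
            int sy = std::min((int)std::round(y / factorY), in.h - 1);
            out.px[(size_t)y*out.w + x] = in.at(sx, sy);
        }
    }
    return out;
}

This fills every output pixel in one pass, so no BFS over unfilled pixels is needed at all.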

Very high inaccuracy when calculating inverse of matrix using Gauss elimination

I am working on a C++ codebase right now which uses a matrix library to calculate various things. One of those things is calculating the inverse of a matrix. It uses Gauss elimination to achieve that. But the result is very inaccurate: so much so that multiplying the inverse matrix with the original matrix isn't even close to the identity matrix.
Here is the code that is used to calculate the inverse, the matrix is templated on a numerical type and the rows and columns:
/// \brief Take the inverse of the matrix.
/// \return A new matrix which is the inverse of the current one.
matrix<T, M, M> inverse() const
{
    static_assert(M == N, "Inverse matrix is only defined for square matrices.");
    // Augment the current matrix with the identity matrix.
    auto augmented = this->augment(matrix<T, M, M>::get_identity());
    for (std::size_t i = 0; i < M; i++)
    {
        // Divide the current row by the diagonal element.
        auto divisor = augmented[i][i];
        for (std::size_t j = 0; j < 2 * M; j++)
        {
            augmented[i][j] /= divisor;
        }
        // For the column of the currently selected diagonal element, set every
        // element except the diagonal one to 0, using the currently selected row.
        for (std::size_t j = 0; j < M; j++)
        {
            if (i == j)
            {
                continue;
            }
            auto multiplier = augmented[j][i];
            for (std::size_t k = 0; k < 2 * M; k++)
            {
                augmented[j][k] -= multiplier * augmented[i][k];
            }
        }
    }
    // The left half is now the identity; slice the inverse out of the right half.
    return augmented.template slice<0, M, M, M>();
}
Now I have made a unit test which checks the inverse against precomputed values. I try two matrices, one 3x3 and one 4x4. I used this website to compute the inverses: https://matrix.reshish.com/ and they do match to a certain degree, since the unit test succeeds. But once I calculate the original matrix * the inverse, nothing even resembling an identity matrix is produced. See the comments in the code below.
BOOST_AUTO_TEST_CASE(matrix_inverse)
{
    auto m1 = matrix<double, 3, 3>({
        {7, 8, 9},
        {10, 11, 12},
        {13, 14, 15}
    });
    auto inverse_result1 = matrix<double, 3, 3>({
        {264917625139441.28, -529835250278885.3, 264917625139443.47},
        {-529835250278883.75, 1059670500557768, -529835250278884.1},
        {264917625139442.4, -529835250278882.94, 264917625139440.94}
    });
    auto m2 = matrix<double, 4, 4>({
        {7, 8, 9, 23},
        {10, 11, 12, 81},
        {13, 14, 15, 11},
        {1, 73, 42, 65}
    });
    auto inverse_result2 = matrix<double, 4, 4>({
        {-0.928094660194201, 0.21541262135922956, 0.4117111650485529, -0.009708737864078209},
        {-0.9641231796116679, 0.20979975728155775, 0.3562651699029188, 0.019417475728154842},
        {1.7099261731391882, -0.39396237864078376, -0.6169346682848, -0.009708737864076772},
        {-0.007812499999999244, 0.01562499999999983, -0.007812500000000278, 0}
    });
    // std::cout << (m1.inverse() * m1) << std::endl;
    // results in
    // 0.500000000 1.000000000 -0.500000000
    // 1.000000000 0.000000000 0.500000000
    // 0.500000000 -1.000000000 1.000000000
    // std::cout << (m2.inverse() * m2) << std::endl;
    // results in
    // 0.396541262 -0.646237864 -0.689016990 -2.162317961
    // 1.206917476 2.292475728 1.378033981 3.324635922
    // -0.884708738 -0.958737864 -0.032766990 -3.756067961
    // -0.000000000 -0.000000000 -0.000000000 1.000000000
    BOOST_REQUIRE_MESSAGE(
        m1.inverse().fuzzy_equal(inverse_result1, 0.1) == true,
        "3x3 inverse is not the expected result."
    );
    BOOST_REQUIRE_MESSAGE(
        m2.inverse().fuzzy_equal(inverse_result2, 0.1) == true,
        "4x4 inverse is not the expected result."
    );
}
I am at my wits' end. I am by no means a specialist in matrix math, since I had to learn it all on the job, but this really is stumping me.
The complete code matrix class is available at:
https://codeshare.io/johnsmith
Line 404 is where the inverse function is located.
Any help is appreciated.
As already established in the comments, the matrix of interest is singular, and thus there is no inverse.
Great, your testing already found the first issue in the code: this case isn't handled properly, and no error is raised.
The bigger problem is that this is not easy to detect. If there were no rounding errors, it would be a piece of cake: just test that the divisor isn't 0! But there are rounding errors in floating-point operations, so the divisor will be a very small nonzero number, and there is no way to tell whether this nonzero value is due to rounding errors or to the matrix being nearly singular (but not singular). However, if the matrix is nearly singular, it is ill-conditioned, and the results cannot be trusted anyway.
So ideally, the algorithm should not only calculate the inverse but also estimate the condition of the original matrix, so the caller can react to a bad condition.
It is probably wise to use well-known and well-tested libraries for this kind of calculation; there is a lot to consider and a lot that can go wrong.
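For illustration, here is a hedged sketch of Gauss-Jordan inversion with partial pivoting and a crude singularity check; it uses std::array instead of the asker's matrix class, and the pivot threshold is a heuristic, not a proper condition-number estimate:

#include <array>
#include <cmath>
#include <cstddef>
#include <optional>
#include <utility>

// Sketch only: invert a by Gauss-Jordan elimination with partial pivoting.
// Returns std::nullopt when no acceptable pivot is found (matrix is
// singular or nearly so); the 1e-12 threshold is a heuristic assumption.
template <typename T, std::size_t M>
std::optional<std::array<std::array<T, M>, M>>
invert(std::array<std::array<T, M>, M> a)
{
    std::array<std::array<T, M>, M> inv{};      // start from the identity
    for (std::size_t i = 0; i < M; i++) inv[i][i] = T(1);

    for (std::size_t i = 0; i < M; i++) {
        // partial pivoting: pick the row with the largest |pivot|
        std::size_t piv = i;
        for (std::size_t r = i + 1; r < M; r++)
            if (std::abs(a[r][i]) > std::abs(a[piv][i])) piv = r;
        if (std::abs(a[piv][i]) < T(1e-12))
            return std::nullopt;                // (near) singular
        std::swap(a[i], a[piv]);
        std::swap(inv[i], inv[piv]);

        // normalize the pivot row
        T d = a[i][i];
        for (std::size_t j = 0; j < M; j++) { a[i][j] /= d; inv[i][j] /= d; }

        // eliminate the pivot column from all other rows
        for (std::size_t r = 0; r < M; r++) {
            if (r == i) continue;
            T m = a[r][i];
            for (std::size_t j = 0; j < M; j++) {
                a[r][j]  -= m * a[i][j];
                inv[r][j] -= m * inv[i][j];
            }
        }
    }
    return inv;
}

Row swaps keep the largest available pivot on the diagonal, which avoids dividing by the tiny values that wrecked the results above; a library routine would additionally estimate the condition number.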

Query points on the vertices of a Hamming cube

I have N points that lie only on the vertices of a cube of dimension D, where D is something like 3.
A vertex may not contain any point, so every point has coordinates in {0, 1}^D. I am only interested in query time, as long as the memory cost is reasonable (not exponential in N, for example :) ).
Given a query that lies on one of the cube's vertices and an input parameter r, find all the vertices (and thus points) that have Hamming distance <= r from the query.
What's the way to go in a C++ environment?
I am thinking of a k-d tree, but I am not sure and want help; any input, even approximate, would be appreciated! Since Hamming distance comes into play, bitwise manipulations should help (e.g. XOR).
There is a nice bit hack to go from one bitmask with k bits set to the lexicographically next permutation, which means it's fairly simple to loop through all masks with k bits set. XORing these masks with an initial value gives all the values at Hamming distance exactly k away from it.
So for D dimensions, where D is less than 32 (otherwise change the types):
uint32_t limit = (1u << D) - 1;
for (int k = 1; k <= r; k++) {
    uint32_t diff = (1u << k) - 1;
    while (diff <= limit) {
        // v is the input vertex
        uint32_t vertex = v ^ diff;
        // use it
        diff = nextBitPermutation(diff);
    }
}
Where nextBitPermutation may be implemented in C++ as something like (if you have __builtin_ctz)
uint32_t nextBitPermutation(uint32_t v) {
    // see https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
    uint32_t t = v | (v - 1);
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}
Or for MSVC (not tested)
uint32_t nextBitPermutation(uint32_t v) {
    // see https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
    uint32_t t = v | (v - 1);
    unsigned long tzc;
    _BitScanForward(&tzc, v); // v != 0 so the return value doesn't matter
    return (t + 1) | (((~t & -~t) - 1) >> (tzc + 1));
}
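Putting the pieces together, a self-contained sketch (my wrapper, not part of the original answer) that collects all vertices within Hamming distance r of v might look like this, assuming GCC/Clang for __builtin_ctz:

#include <cstdint>
#include <vector>

// Repeats the helper above so the sketch is self-contained.
uint32_t nextBitPermutation(uint32_t v) {
    uint32_t t = v | (v - 1);
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

// Collect every vertex of the D-cube (D < 32) at Hamming distance 1..r
// from v, in the same enumeration order as the loop above.
std::vector<uint32_t> verticesWithin(uint32_t v, int D, int r) {
    std::vector<uint32_t> out;
    uint32_t limit = (1u << D) - 1;
    for (int k = 1; k <= r && k <= D; k++) {
        uint32_t diff = (1u << k) - 1;        // smallest mask with k bits set
        while (diff <= limit) {
            out.push_back(v ^ diff);          // vertex at distance exactly k
            diff = nextBitPermutation(diff);
        }
    }
    return out;
}

The k <= D guard simply avoids asking for more set bits than there are dimensions.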
If D is really low, 4 or lower, the old popcnt-with-pshufb trick works really well, and everything lines up nicely, like this:
uint16_t query(int vertex, int r, int8_t* validmask)
{
    // validmask should be an array of 16 int8_t's:
    // 0 for a vertex that doesn't exist, -1 if it does
    __m128i valid = _mm_loadu_si128((__m128i*)validmask);
    __m128i t0 = _mm_set1_epi8(vertex);
    __m128i r0 = _mm_set1_epi8(r + 1);
    __m128i all = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
    __m128i popcnt_lut = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4);
    __m128i dist = _mm_shuffle_epi8(popcnt_lut, _mm_xor_si128(t0, all));
    __m128i close_enough = _mm_cmpgt_epi8(r0, dist);
    __m128i result = _mm_and_si128(close_enough, valid);
    return _mm_movemask_epi8(result);
}
This should be fairly fast: fast compared to the bit hack above (nextBitPermutation, which is fairly heavy, is used a lot there) and also compared to looping over all vertices and testing whether they are in range (even with a builtin popcnt, that automatically takes at least 16 cycles, and the above shouldn't, assuming everything is cached or even permanently in a register). The downside is that the result is annoying to work with, since it's a mask of which vertices both exist and are in range of the queried point, not a list of them. It would combine well with doing some processing on data associated with the points, though.
This also scales down to D=3, of course; just make none of the points >= 8 valid. D > 4 can be done similarly, but it takes more code, and since this is really a brute-force solution that is only fast due to parallelism, it fundamentally gets slower exponentially in D.
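As a usage sketch for query() above (everything here is my assumption for illustration: the example vertices, D = 4, and compiling with SSSE3 enabled, e.g. -mssse3):

#include <cstdint>
#include <cstdio>

// query() from the snippet above is assumed to be in scope.
int main() {
    int8_t validmask[16] = {0};
    int points[] = {0, 3, 15};              // hypothetical occupied vertices
    for (int p : points) validmask[p] = -1;

    // all valid vertices within Hamming distance 2 of vertex 0001b
    uint16_t hits = query(1, 2, validmask);
    for (int vtx = 0; vtx < 16; vtx++)
        if (hits & (1u << vtx))
            std::printf("vertex %d is valid and within range\n", vtx);
}

Here vertices 0 and 3 are reported (distance 1 from vertex 1), while 15 is rejected (distance 3).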

Minimization of (z-xi)^2

If I want to find the median (which is equivalent to minimizing the function ∑i |z - xi| over z), I can use the following code snippet:
std::vector<int> v{5, 6, 4, 3, 2, 6, 7, 9, 3};
std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
std::cout << "The median is " << v[v.size()/2] << '\n';
Is there something like this to find the "median" for minimization of ∑i (z - xi)^2? That is, I want to find the element of the array at which the sum of these functions is minimal.
If you want to find the nth_element() according to a predicate comparing (z - xi)^2, you can just add the corresponding logic to the binary predicate you can optionally pass to nth_element():
auto trans = [=](int xi){ return (z - xi) * (z - xi); };
std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end(),
                 [&](int v0, int v1) { return trans(v0) < trans(v1); });
From the question it isn't clear whether z or xi is the changing variable; from the looks of it, I assumed z is fixed and xi refers to the array elements. If z is the changing one, just rename the argument of the lambda trans (note that trans captures z by value via [=]).
Your question works on at least two different levels: you're asking how to implement a certain algorithm idiomatically in C++11, and at the same time you're asking for an efficient algorithm for finding the element closest to the mean of a list of integers.
You correctly observe that to compute the median, all we have to do is run the QuickSelect algorithm with k set equal to n/2. In the C++ standard library, QuickSelect is spelled std::nth_element:
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
const int k = std::size(v) / 2;
std::nth_element(std::begin(v), &v[k], std::end(v)); // mutate in-place
int median = v[k]; // the k'th element is now the median
(For std::size, see proposal N4280, coming soon to a C++17 near you! Until then, use your favorite NELEM macro, or go back to using heap-allocated vector.)
This QuickSelect implementation doesn't really have anything to do with "finding array element xk such that ∑i |xi − xk| is minimized." I mean, it's mathematically equivalent, yes, but there's nothing in the code that corresponds to summing or subtracting integers.
The naïve algorithm to "find array element xk such that ∑i |xi − xk| is minimized" is simply
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
auto sum_of_differences = [&v](int xk) {
    int result = 0;
    for (auto&& xi : v) {
        result += std::abs(xi - xk);
    }
    return result;
};
int median =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return sum_of_differences(xa) < sum_of_differences(xb);
    });
This is a horribly inefficient algorithm, given that QuickSelect does the same job.
However, it's trivial to extend this code to work with any mathematical function you want to "minimize the sum of". Here's the same skeleton of code, but with the function "squared difference" instead of "difference":
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
auto sum_of_squared_differences = [&v](int xk) {
    int result = 0;
    for (auto&& xi : v) {
        result += (xi - xk) * (xi - xk);
    }
    return result;
};
int closest_element_to_the_mean =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return sum_of_squared_differences(xa) < sum_of_squared_differences(xb);
    });
In this case we can also find an improved algorithm; namely, compute the mean up front and only afterward scan the array looking for the element that's closest to that mean:
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
double actual_mean = std::accumulate(std::begin(v), std::end(v), 0.0) / std::size(v);
auto distance_to_actual_mean = [=](int xk) {
    return std::abs(xk - actual_mean);
};
int closest_element_to_the_mean =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return distance_to_actual_mean(xa) < distance_to_actual_mean(xb);
    });
(P.S. – remember that none of the above code snippets should be used in practice, unless you're absolutely sure you don't need to care about integer overflow, floating-point rounding error, and a host of other mathy issues.)
Given an array x1, x2, …, xn of integers, the real number z that minimizes ∑i∈{1,2,…,n} (z − xi)² is the mean z* = (1/n) ∑i∈{1,2,…,n} xi. You want to call std::min_element with a comparator that treats xi as less than xj if and only if |n·xi − n·z*| < |n·xj − n·z*| (we use n·z* = ∑i∈{1,2,…,n} xi to avoid floating-point arithmetic; there are ways to reduce the extra precision required).
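A hedged sketch of that comparator in code (my own example values; assumes n·xi fits in long long):

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <numeric>
#include <vector>

// Compare |n*xi - S| where S = sum of all elements (= n * mean), so the
// whole comparison stays in exact integer arithmetic.
int main() {
    std::vector<int> v{5, 6, 4, 3, 2, 6, 7, 9, 3};
    long long n = (long long)v.size();
    long long S = std::accumulate(v.begin(), v.end(), 0LL); // n * mean
    int closest = *std::min_element(v.begin(), v.end(),
        [&](int xi, int xj) {
            return std::llabs(n * xi - S) < std::llabs(n * xj - S);
        });
    std::cout << "element closest to the mean: " << closest << '\n';
}

Here S plays the role of n·z*, so no floating-point rounding can affect the result.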