Is it possible to create a vector from one value to another with a fixed step, without using a loop, in C++?
For example, I want to build a vector from 1 to 10 with step 0.5. In MATLAB I can do this as follows:
vector = [1:0.5:10];
Is there something similar in C++?
With the help of std::generate_n you can do:
std::vector<double> v;
const int size = 10 * 2 - 1; // 19 values: 1.0, 1.5, ..., 10.0
v.reserve(size);
std::generate_n(std::back_inserter(v), size, [d = 0.5]() mutable { return d += 0.5; });
You need a loop somewhere. MATLAB is simply hiding the loop from you. If this is something you do often, just create a function to make it easier to use:
#include <vector>
auto make_vector(double beg, double step, double end)
{
std::vector<double> vec;
vec.reserve((end - beg) / step + 1);
while (beg <= end) {
vec.push_back(beg);
beg += step;
}
return vec;
}
int main() {
auto vec = make_vector(1, 0.5, 10);
}
It's not possible without a loop, but you can hide the loop by using e.g, std::generate or std::generate_n:
constexpr size_t SIZE = (10 - 1) * 2 + 1; // 19 values: 1.0, 1.5, ..., 10.0
std::vector<double> data(SIZE);
double new_value = 1.0;
std::generate(begin(data), end(data), [&new_value]()
{
double current_value = new_value;
new_value += 0.5;
return current_value;
});
Of course, this is quite a lot to write, and an explicit loop would probably be better:
std::vector<double> data;
for (double i = 1.0; i <= 10; i += 0.5)
data.push_back(i);
If the stepping is "one" (e.g. 1 or 1.0) then you could use std::iota instead:
std::vector<double> data(10);
std::iota(begin(data), end(data), 1.0); // Initializes the vector with values from 1.0 to 10.0
No, there is no such thing in C++. You will have to create a loop and populate your vector, something like:
std::vector<double> v;
v.reserve(19);
for(size_t i = 2; i < 21; ++i)
{
v.push_back(i / 2.);
}
I'm using an integer loop here instead of a double loop with 0.5 increments, both to be sure about the number of elements I get and to minimize numerical rounding errors (0.5 is exactly representable, but 1/3, for instance, is not).
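The same integer-index trick can be wrapped into a small helper (a sketch; make_range is my own name, not a standard function). Each element is computed directly as beg + i*step, so rounding error does not accumulate across iterations:

```cpp
#include <cstddef>
#include <vector>

// Each element is computed from an integer index rather than by repeated
// addition, so floating-point error does not build up across the range.
std::vector<double> make_range(double beg, double step, double end) {
    std::size_t n = static_cast<std::size_t>((end - beg) / step) + 1;
    std::vector<double> v;
    v.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(beg + static_cast<double>(i) * step);
    return v;
}
```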
A simple solution using the ranges-v3 library:
#include <iostream>
#include <range/v3/all.hpp>
auto make_vector(double min, double step, double max) {
const auto size = static_cast<std::size_t>((max - min) / step);
return ranges::views::iota(std::size_t{0}, size + 1) |
ranges::views::transform([min, step](auto i) { return min + step * i; }) |
ranges::to<std::vector>();
}
int main() {
auto vec = make_vector(1, .5, 5);
for (auto x : vec)
std::cout << x << ' ';
// Output: 1 1.5 2 2.5 3 3.5 4 4.5 5
}
Ranges will be a part of C++20.
If you need to do a lot of this, you can easily write your own function that can work with any container (that supports reserve and is fill-able by std::generate):
#include <algorithm> // std::generate
#include <utility>   // std::exchange

template <typename TContainer>
TContainer fill(typename TContainer::value_type start,
typename TContainer::value_type step,
typename TContainer::value_type end) {
size_t size = static_cast<size_t>((end - start)/step + 1);
TContainer output(size);
std::generate(std::begin(output), std::end(output),
[&start, step]() {
return std::exchange(start, start + step);
}
);
return output;
}
Then you can use it as follows:
auto vec = fill<std::vector<int>>(0, 2, 10);
auto list = fill<std::list<float>>(1, 0.3, 5);
And you will get:
vec: 0, 2, 4, 6, 8, 10
list: 1, 1.3, 1.6, 1.9, 2.2, 2.5, 2.8, 3.1, 3.4, 3.7, 4, 4.3, 4.6, 4.9
Related
I am writing the linear interpolation function below, which is meant to be generic, but the current result is not.
The function finds a desired quantity of equally spaced, linearly interpolated points between two given boundary points. Both the desired quantity and the boundaries are given as parameters. A vector of the interpolated values is returned.
The issue I have concerns the return type, which always turns out to be integer, even when it should have a fractional part, for example:
vec = interpolatePoints(5, 1, 4);
for (auto val : vec) std::cout << val << std::endl; // prints 4, 3, 2, 1
But it should have printed: 4.2, 3.4, 2.6, 1.8
What should I do to make it generic and have correct return values?
code:
template <class T>
std::vector<T> interpolatePoints(T lower_limit, T high_limit, const unsigned int quantity) {
auto step = ((high_limit - lower_limit)/(double)(quantity+1));
std::vector<T> interpolated_points;
for(unsigned int i = 1; i <= quantity; i++) {
interpolated_points.push_back((std::min(lower_limit, high_limit) + (step*i)));
}
return interpolated_points;
}
After some simplifications the function might look like:
template<typename T, typename N, typename R = std::common_type_t<double, T>>
std::vector<R> interpolate(T lo_limit, T hi_limit, N n) {
const auto lo = static_cast<R>(lo_limit);
const auto hi = static_cast<R>(hi_limit);
const auto step = (hi - lo) / (n + 1);
std::vector<R> pts(n);
const auto gen = [=, i = N{0}]() mutable { return lo + step * ++i; };
std::generate(pts.begin(), pts.end(), gen);
return pts;
}
The type of elements in the returned std::vector is std::common_type_t<double, T>. For int, it is double, for long double, it is long double. double looks like a reasonable default type.
You just have to pass the correct type:
auto vec = interpolatePoints(5., 1., 4); // T deduced as double
And in C++20, you might use std::lerp, to have:
template <class T>
std::vector<T> interpolatePoints(T lower_limit, T high_limit, const unsigned int quantity) {
auto step = 1 / (quantity + 1.);
std::vector<T> interpolated_points;
for(unsigned int i = 1; i <= quantity; i++) {
interpolated_points.push_back(std::lerp(lower_limit, high_limit, step * i));
}
return interpolated_points;
}
I've got an array (actually a std::vector) of ~7k elements.
If you plot this data, you get a diagram of the fuel combustion. I want to reduce this vector from 7k elements down to 721 (one every 0.5 degrees) or ~1200 (one every 0.3 degrees), while keeping the shape of the diagram the same. How can I do it?
Currently I copy every 9th element from the big vector into a new one, then trim the excess evenly from the front and back to get down to 721 elements.
QVector <double> newVMTVector;
for(QVector <double>::iterator itv = oldVmtDataVector.begin(); itv < oldVmtDataVector.end() - 9; itv+=9){
newVMTVector.push_back(*itv);
}
auto useless = newVMTVector.size() - 721;
if(useless%2 == 0){
newVMTVector.erase(newVMTVector.begin(), newVMTVector.begin() + useless/2);
newVMTVector.erase(newVMTVector.end() - useless/2, newVMTVector.end());
}
else{
newVMTVector.erase(newVMTVector.begin(), newVMTVector.begin() + useless/2+1);
newVMTVector.erase(newVMTVector.end() - useless/2, newVMTVector.end());
}
newVMTVector.squeeze();
oldVmtDataVector.clear();
oldVmtDataVector = newVMTVector;
I can swear there is an algorithm that averages and reduces the array.
The way I understand it, you want to pick the elements [0, k, 2k, 3k, ...] where k is 10 or k is 6.
Here's a simple take:
template <typename It>
It strided_inplace_reduce(It it, It const last, size_t stride) {
It out = it;
if (stride < 1) return last;
while (it < last)
{
*out++ = *it;
std::advance(it, stride);
}
return out;
}
Generalizing a bit for non-random-access iterators:
#include <iterator>
namespace detail {
// version for random access iterators
template <typename It>
It strided_inplace_reduce(It it, It const last, size_t stride, std::random_access_iterator_tag) {
It out = it;
if (stride < 1) return last;
while (it < last)
{
*out++ = *it;
std::advance(it, stride);
}
return out;
}
// other iterator categories
template <typename It>
It strided_inplace_reduce(It it, It const last, size_t stride, ...) {
It out = it;
if (stride < 1) return last;
while (it != last) {
*out++ = *it;
for (size_t n = stride; n && it != last; --n)
{
it = std::next(it);
}
}
return out;
}
}
template <typename Range>
auto strided_inplace_reduce(Range& range, size_t stride) {
using std::begin;
using std::end;
using It = decltype(begin(range));
It it = begin(range), last = end(range);
return detail::strided_inplace_reduce(it, last, stride, typename std::iterator_traits<It>::iterator_category{});
}
#include <vector>
#include <list>
#include <algorithm> // std::copy
#include <iostream>
int main() {
{
std::vector<int> v { 1,2,3,4,5,6,7,8,9 };
v.erase(strided_inplace_reduce(v, 2), v.end());
std::copy(v.begin(), v.end(), std::ostream_iterator<int>(std::cout << "\nv: ", " "));
}
{
std::list<int> l { 1,2,3,4,5,6,7,8,9 };
l.erase(strided_inplace_reduce(l, 4), l.end());
std::copy(l.begin(), l.end(), std::ostream_iterator<int>(std::cout << "\nl: ", " "));
}
}
Prints
v: 1 3 5 7 9
l: 1 5 9
What you need is an interpolation. There are many libraries providing many types of interpolation. This one is very lightweight and easy to setup and run:
http://kluge.in-chemnitz.de/opensource/spline/
All you need to do is create a second vector that contains the X values, pass both vectors to generate the spline, and then evaluate interpolated results every 0.5 degrees or whatever:
std::vector<double> Y; // Y is your current vector of fuel combustion values with ~7k elements
std::vector<double> X(Y.size()); // reserve() alone would leave X empty, so size it before indexing
double step_x = 360.0 / (double)Y.size();
for (size_t i = 0; i < X.size(); ++i)
X[i] = i * step_x;
tk::spline s;
s.set_points(X, Y);
double interpolation_step = 0.5;
std::vector<double> interpolated_results;
interpolated_results.reserve(std::ceil(360/interpolation_step) + 1);
for (double i = 0.0, int j = 0; i <= 360; i += interpolation_step, ++j) // <= in order to obtain range <0;360>
interpolated_results[j] = s(i);
if (fmod(360, interpolation_step) != 0.0) // for steps that don't divide 360 evenly, e.g. 0.7 deg, we need to close the range
interpolated_results.back() = s(360);
// now interpolated_results contain values every 0.5 degrees
This should give you an idea of how to use this kind of library. If you need some other interpolation type, just find the one that suits your needs; the usage should be similar.
I'm trying to learn how to use lambda functions, and want to do something like:
Given a vector = {1,2,3,4,5}
I want the sum of pairwise sums = (1+2)+(2+3)+...
Below is my attempt, which is not working properly.
#include <vector>
#include <algorithm>
#include <numeric> // std::accumulate
using namespace std;
vector <double> data = {1,10,100};
double mean = accumulate(data.begin(),data.end(),0.0);
double foo()
{
auto bar = accumulate(data.begin(),data.end(),0.0,[&](int k, int l){return (k+l);});
return bar;
}
I tried changing the return statement to return (data.at(k)+data.at(l)), which didn't quite work.
Adding pairwise sums is the same as summing over everything twice except the first and last elements. No need for a fancy lambda.
auto result = std::accumulate(std::begin(data), std::end(data), 0.0)
* 2.0 - data.front() - data.back();
Or a little safer:
auto result = std::accumulate(std::begin(data), std::end(data), 0.0)
* 2.0 - (!data.empty() ? data.front() : 0) - (data.size() > 1 ? data.back() : 0);
If you insist on a lambda, you can move the doubling inside:
result = std::accumulate(std::begin(data), std::end(data), 0.0,
[](double lhs, double rhs){return lhs + 2.0*rhs;})
- data.front() - data.back();
Note that lhs within the lambda is the current sum, not the next two numbers in the sequence.
If you insist on doing all the work within the lambda, you can track an index by using generalized capture:
result = std::accumulate(std::begin(data), std::end(data), 0.0,
[currIndex = 0U, lastIndex = data.size()-1] (double lhs, double rhs) mutable
{
double result = lhs + rhs;
if (currIndex != 0 && currIndex != lastIndex)
result += rhs;
++currIndex;
return result;
});
You misunderstand how std::accumulate works. Let's say you have int array[], then accumulate does:
int value = initial_val;
value = lambda( value, array[0] );
value = lambda( value, array[1] );
...
return value;
this is pseudo code, but it should be pretty easy to understand how it works. So in your case std::accumulate does not seem to be applicable. You may write a loop, or create your own special accumulate function:
auto lambda = []( int a, int b ) { return a + b; };
auto sum = 0.0;
for( auto it = data.begin(); it != data.end(); ++it ) {
auto itn = std::next( it );
if( itn == data.end() ) break;
sum += lambda( *it, *itn );
}
You could capture a variable in the lambda to keep the last value:
#include <vector>
#include <algorithm>
#include <numeric>
std::vector<double> data = {1,10,100};
double mean = accumulate(data.begin(), data.end(), 0.0);
double foo()
{
double last{0};
auto bar = accumulate(data.begin(), data.end(), 0.0, [&](auto k, auto l)
{
auto total = l + last;
last = l;
return total+k;
});
return bar;
}
int main()
{
auto val = foo();
}
You could use some sort of index, and add the next number.
size_t index = 1;
auto bar = accumulate(data.begin(), data.end(), 0.0, [&index, &data](double a, double b) {
if (index < data.size())
return a + b + data[index++];
else
return a + b;
});
Note you have a vector of doubles but are using ints to sum.
Say I have a std::vector with N elements. I would like to copy every n-th element of it to a new vector, or average each group of n elements and copy the result (downsampling the original vector). So I want to do this:
std::vector<double> vec(N);
long n = 4;
std::vector<double> ds(N/n);
for(long i = 0; i < ds.size(); i++)
{
ds[i] = vec[i*n];
}
or
for(long i = 0; i < ds.size(); i++)
{
double tmp = 0;
for(long j = 0; j < n; j++)
{
tmp += vec[i*n+j];
}
ds[i] = tmp/static_cast<double>(n);
}
Is there a way to do this using the standard algorithms of C++? Like using std::copy with binary functions? I have billions of elements that I want to treat this way, and I want this to be as fast as possible.
PS: I would prefer not to use external libraries such as boost.
For readability, the loop would be a good idea, as pointed out by Vlad in the comments. But if you really want to do something like this, you could try:
int cnt=0,n=3;
vector<int> u(v.size()/3);
copy_if (v.begin(), v.end(), u.begin(),
[&cnt,&n] (int i)->bool {return ++cnt %n ==0; } );
If you want to average, it gets worse, as you'd have to use similar tricks combining transform() with copy_if().
Edit:
If you're looking for performance, you'd better stick to the loop, as stressed in the comments by davidhigh: it will avoid the overhead of the call to the lambda function for each element.
If you're looking for an algorithm because you're doing this very often, you'd better write your own generic one.
You could write your own generic algorithms inspired from the design principles used in <algorithm>.
For the copy of every n elements:
template<class in_it, class out_it>
out_it copy_every_n( in_it b, in_it e, out_it r, size_t n) {
for (size_t i=distance(b,e)/n; i--; advance (b,n))
*r++ = *b;
return r;
}
Example of use:
vector<int> v {1,2,3,4,5,6,7,8,9,10};
vector<int> z(v.size()/3);
copy_every_n(v.begin(), v.end(), z.begin(), 3);
For averaging the elements n by n, you can use:
template<class in_it, class out_it>
out_it average_every_n( in_it b, in_it e, out_it r, size_t n) {
typename out_it::value_type tmp=0;
for (size_t cnt=0; b!=e; b++) {
tmp+=*b;
if (++cnt==n) {
cnt=0;
*r++=tmp/n;
tmp=0;
}
}
return r;
}
Example of use:
vector<int> w(v.size()/3);
average_every_n(v.begin(), v.end(), w.begin(), 3);
The advantage over your inital loops, is that this will work not only on vectors, but on any container providing the begin() and end() iterator. And it avoids overheads that I pointed out in my other answer.
If you are restricted to standard library features and algorithms, and loops are not allowed, the code can look the following way. Note that the code is based on C++14; if you need it to compile with a compiler that only supports C++11, you will have to make some minor changes.
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <iterator>
int main()
{
const size_t N = 4;
std::vector<double> src = { 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9 };
size_t n = src.size() / N;
std::vector<double> dst( n );
std::copy_if( src.begin(), std::next( src.begin(), n * N ), dst.begin(),
[i = 0] ( auto ) mutable { return ( i = ( i + 1 ) % N ) == 0; } );
for ( double x : dst ) std::cout << x << ' ';
std::cout << std::endl;
dst.assign( n, 0.0 );
std::accumulate( src.begin(), std::next( src.begin(), n * N ), dst.begin(),
[i = 0] ( auto acc, auto x ) mutable
{
*acc += x;
if ( ( i = ( i + 1 ) % N ) == 0 ) *acc++ /= N;
return acc;
} );
for ( double x : dst ) std::cout << x << ' ';
std::cout << std::endl;
}
The program output is
4.4 8.8
2.75 7.15
This compound expression in the if condition
if ( ( i = ( i + 1 ) % N ) == 0 ) *acc++ /= N;
you can replace with a simpler one:
if ( ++i % N == 0 ) *acc++ /= N;
You may have explicitly stated that you prefer not to use Boost, but any non-Boost solution would really be implementing exactly this sort of thing anyway, so I'll show you how I would do it in Boost. Ultimately, I think you're better off writing a simple loop.
Downsampling uses strided
boost::copy(
input | strided(2),
std::back_inserter(output));
Downsampling average additionally uses transformed, though this solution is non-generic and specifically relies upon vector being contiguous:
boost::copy(
input | strided(2) | transformed([](auto& x){
return std::accumulate(&x, &x + 2, 0) / 2.;
}),
std::back_inserter(output));
Of course that has issues if the input isn't an exact multiple of the stride length, so it'd probably be better to do something like:
auto downsample_avg = [](auto& input, int n){
return input | strided(n) | transformed([&,n](auto& x){
auto begin = &x;
auto end = begin + std::min<size_t>(n, &input.back() - begin + 1);
return std::accumulate(begin, end, 0.0) / (end - begin);
});
};
boost::copy(
downsample_avg(input, 2),
std::back_inserter(output));
How about this implementation?
#include <iterator>
template<typename InputIt, typename OutputIt>
OutputIt DownSample(InputIt first, InputIt last, OutputIt d_first,
typename std::iterator_traits<InputIt>::difference_type n) {
while (first < last) {
*(d_first++) = *first;
std::advance(first, n);
}
return d_first;
}
Is there a way to calculate mean and standard deviation for a vector containing samples using Boost?
Or do I have to create an accumulator and feed the vector into it?
I don't know if Boost has more specific functions, but you can do it with the standard library.
Given std::vector<double> v, this is the naive way:
#include <numeric>
double sum = std::accumulate(v.begin(), v.end(), 0.0);
double mean = sum / v.size();
double sq_sum = std::inner_product(v.begin(), v.end(), v.begin(), 0.0);
double stdev = std::sqrt(sq_sum / v.size() - mean * mean);
This is susceptible to overflow or underflow for huge or tiny values. A slightly better way to calculate the standard deviation is:
double sum = std::accumulate(v.begin(), v.end(), 0.0);
double mean = sum / v.size();
std::vector<double> diff(v.size());
std::transform(v.begin(), v.end(), diff.begin(),
std::bind2nd(std::minus<double>(), mean));
double sq_sum = std::inner_product(diff.begin(), diff.end(), diff.begin(), 0.0);
double stdev = std::sqrt(sq_sum / v.size());
UPDATE for C++11:
The call to std::transform can be written using a lambda function instead of std::minus and std::bind2nd (deprecated in C++11 and removed in C++17):
std::transform(v.begin(), v.end(), diff.begin(), [mean](double x) { return x - mean; });
If performance is important to you, and your compiler supports lambdas, the stdev calculation can be made faster and simpler: In tests with VS 2012 I've found that the following code is over 10 X quicker than the Boost code given in the chosen answer; it's also 5 X quicker than the safer version of the answer using standard libraries given by musiphil.
Note I'm using sample standard deviation, so the below code gives slightly different results (Why there is a Minus One in Standard Deviations)
double sum = std::accumulate(std::begin(v), std::end(v), 0.0);
double m = sum / v.size();
double accum = 0.0;
std::for_each (std::begin(v), std::end(v), [&](const double d) {
accum += (d - m) * (d - m);
});
double stdev = sqrt(accum / (v.size()-1));
Using accumulators is the way to compute means and standard deviations in Boost.
accumulator_set<double, stats<tag::variance> > acc;
for_each(a_vec.begin(), a_vec.end(), bind<void>(ref(acc), _1));
cout << mean(acc) << endl;
cout << sqrt(variance(acc)) << endl;
Improving on the answer by musiphil, you can write a standard deviation function without the temporary vector diff, just using a single inner_product call with the C++11 lambda capabilities:
double stddev(std::vector<double> const & func)
{
double mean = std::accumulate(func.begin(), func.end(), 0.0) / func.size();
double sq_sum = std::inner_product(func.begin(), func.end(), func.begin(), 0.0,
[](double const & x, double const & y) { return x + y; },
[mean](double const & x, double const & y) { return (x - mean)*(y - mean); });
return std::sqrt(sq_sum / func.size());
}
I suspect doing the subtraction multiple times is cheaper than using up additional intermediate storage, and I think it is more readable, but I haven't tested the performance yet.
It seems the following elegant recursive solution has not been mentioned, although it has been around for a long time. Referring to Knuth's Art of Computer Programming,
mean_1 = x_1, variance_1 = 0; //initial conditions; edge case;
//for k >= 2,
mean_k = mean_{k-1} + (x_k - mean_{k-1}) / k;
variance_k = variance_{k-1} + (x_k - mean_{k-1}) * (x_k - mean_k);
then for a list of n>=2 values, the estimate of the standard deviation is:
stddev = std::sqrt(variance_n / (n-1)).
Hope this helps!
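A sketch of that recursion in code, using the sample (n-1) form for the final standard deviation (sample_stddev is my own name):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Knuth/Welford one-pass recursion: m2 accumulates the running sum of
// squared deviations, updated from the previous and current means.
double sample_stddev(const std::vector<double>& xs) {
    double mean = 0.0, m2 = 0.0;
    std::size_t k = 0;
    for (double x : xs) {
        ++k;
        double delta = x - mean;   // x_k - mean_{k-1}
        mean += delta / k;         // mean_k = mean_{k-1} + (x_k - mean_{k-1}) / k
        m2 += delta * (x - mean);  // += (x_k - mean_{k-1}) * (x_k - mean_k)
    }
    return k > 1 ? std::sqrt(m2 / (k - 1)) : 0.0;
}
```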
My answer is similar to Josh Greifer's but generalised to sample covariance. Sample variance is just sample covariance with the two inputs identical. This includes Bessel's correction.
template <class Container> typename Container::value_type cov(const Container &x, const Container &y)
{
double sum_x = std::accumulate(std::begin(x), std::end(x), 0.0);
double sum_y = std::accumulate(std::begin(y), std::end(y), 0.0);
double mx = sum_x / x.size();
double my = sum_y / y.size();
double accum = 0.0;
for (size_t i = 0; i < x.size(); i++)
{
accum += (x.at(i) - mx) * (y.at(i) - my);
}
return accum / (x.size() - 1);
}
2x faster than the previously mentioned versions, mostly because the transform() and inner_product() loops are joined.
Tested in VS2010. With the shortcuts (Flo = float, CR = const ref, VFlo = vector) expanded into standard C++:
#include <cmath>
#include <vector>
float stdDev(const std::vector<float>& crVec) {
size_t n = crVec.size(); if (n < 2) return 0.0f;
float fSqSum = 0.0f, fSum = 0.0f;
for (float f : crVec) fSqSum += f * f; // sum of squares
for (float f : crVec) fSum += f;       // plain sum
float fSumSq = fSum * fSum;
float fSumSqDivN = fSumSq / n;
float fSubSqSum = fSqSum - fSumSqDivN;
float fPreSqrt = fSubSqSum / (n - 1);
return std::sqrt(fPreSqrt);
}
In order to calculate the sample mean with better precision, the following r-step recursion can be used:
mean_k = 1/k * [(k-r) * mean_{k-r} + sum_{i=k-r+1}^{k} x_i],
where r is chosen to make the summation components closer to each other in magnitude.
Create your own container:
#include <cmath>   // std::sqrt
#include <list>
#include <numeric> // std::accumulate
template <class T>
class statList : public std::list<T>
{
public:
statList() : std::list<T>() {}
~statList() {}
T mean() {
return std::accumulate(this->begin(), this->end(), 0.0) / this->size();
}
T stddev() {
T diff_sum = 0;
T m = mean();
for (auto it = this->begin(); it != this->end(); ++it)
diff_sum += (*it - m) * (*it - m);
return std::sqrt(diff_sum / this->size()); // std::sqrt turns the variance into the standard deviation
}
};
It does have some limitations, but it works beautifully when you know what you are doing.
// mean deviation in C++
/* A deviation that is the difference between an observed value and the true value of a quantity of interest (such as a population mean) is an error; a deviation that is the difference between the observed value and an estimate of the true value (such an estimate may be a sample mean) is a residual. These concepts are applicable for data at the interval and ratio levels of measurement. */
#include <iostream>
#include <conio.h>
using namespace std;
/* run this program using the console pauser or add your own getch, system("pause") or input loop */
int main(int argc, char** argv)
{
int i,cnt;
cout<<"please enter count:\t";
cin>>cnt;
float *num=new float [cnt];
float *s=new float [cnt];
float sum=0,ave,M=0,M_D;
for(i=0;i<cnt;i++)
{
cin>>num[i];
sum+=num[i];
}
ave=sum/cnt;
for(i=0;i<cnt;i++)
{
s[i]=ave-num[i];
if(s[i]<0)
{
s[i]=s[i]*(-1);
}
cout<<"\n|ave - number| = "<<s[i];
M+=s[i];
}
M_D=M/cnt;
cout<<"\n\n Average: "<<ave;
cout<<"\n M.D(Mean Deviation): "<<M_D;
delete [] num;
delete [] s;
getch();
return 0;
}