I am struggling to make these two expressions come out equal, probably because of my weak understanding of the mathematics.
The problem is that the two results do not match.
Here is my code, for better understanding:
#include <iostream>
#include <cmath> // <ccomplex> was the wrong header here; M_E comes from <cmath>
using std::cout;

int main() {
    int n = 8;
    double sum = 0.0;
    unsigned long long fact = 1;
    for (int i = 1; i <= n; i++)
    {
        fact *= 2*i*(2*i-1); // fact is now (2i)!
        sum += 1.0 / fact;
    }
    std::cout << "first equation " << sum << std::endl;
    double e = M_E;
    double st = 1.0/2.0*(e + (1.0/e));
    std::cout << "second equation " << st << std::endl;
    return 0;
}
The output:
first equation 0.543081
second equation 1.54308
The results are close but not equal; at the very least, they should match in the part before the decimal point.
The series you are computing is sum_{n=0}^inf 1/(2n)! = cosh(1) = (e + 1/e)/2, and your loop starts at n = 1. You don't account for n = 0, which yields 1/0! and thus 1. Therefore, you need to add 1 to sum.
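A minimal sketch of that fix, keeping everything else from the question as-is and just seeding the sum with the missing term:

#include <iostream>

int main() {
    int n = 8;
    double sum = 1.0; // the n = 0 term: 1/0! = 1
    unsigned long long fact = 1;
    for (int i = 1; i <= n; i++) {
        fact *= 2*i*(2*i-1); // fact is now (2i)!
        sum += 1.0 / fact;
    }
    std::cout << sum << std::endl; // prints ~1.54308, i.e. cosh(1)
    return 0;
}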
I am trying to compute the sum of a series in C++, and I am getting the right result.
But I want to get the same answer for this equation using an if statement instead of a loop,
and I keep struggling with it.
#include <math.h>
#include <iostream>
using namespace std;

int main()
{
    double x = 1;
    double som = 0;
    double lim_nbr = pow(10.0, -6);
    int n = 1;
    do {
        x = 1.0 / ((n*n*4.0 - 1) * n);
        som += x;
        n += 1;
    } while (x >= lim_nbr);
    double correctSum = 2.0*log(2.0) - 1.0;
    cout << "Sum = " << som << endl;
    cout << "Sumcorrect = " << correctSum << endl;
}
In this case, to express the loop using only an if statement, an alternative is to use a recursive function. Look at this example:
#include <math.h>
#include <iostream>
using namespace std;

double calc(double lim_nbr, double som, double x, int n)
{
    if (x >= lim_nbr || som == 0) // som == 0 lets the very first call through
    {
        x = 1.0 / ((n*n*4.0 - 1) * n);
        som += x;
        n += 1;
        return calc(lim_nbr, som, x, n); // the recursive result must be returned
    }
    else
    {
        return som;
    }
}

int main()
{
    double lim_nbr = pow(10.0, -6);
    /* Call the function with the initial values */
    double som = calc(lim_nbr, 0, 1, 1);
    cout << "SumWithIf = " << som << endl;
}
The result you are getting with the do-while loop is an approximation to the exact value that you are getting in correctSum. correctSum is the result obtained by adding the series up to infinitely many terms, whereas your do-while loop adds only a finite number of terms. The difference between the two values therefore shows up as the error.
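As a rough estimate of the truncation error (my own bound, not part of the original answer): since 1/(n(4n^2-1)) < 1/(4n^3), the tail of the series after N terms is bounded by

sum_{n > N} 1/(4n^3) < 1/(8N^2)

The loop stops once a term drops below 10^-6, which happens around N ≈ 63, so the remaining error is on the order of 10^-5.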
I have a network with two inputs, two hidden nodes in a single layer, and an output node.
I am trying to solve XOR problem:
| i0 | i1 | desired output |
|----|----|----------------|
| 0  | 0  | 0              |
| 1  | 0  | 1              |
| 0  | 1  | 1              |
| 1  | 1  | 0              |
With my current code, I am running all 4 records above in a single epoch. I then repeat the epoch 20,000 times. I calculate the error after each record, not each epoch, and I back-propagate the error at this same time.
I use only sigmoid in the output layer, as I understand I want a result between 0 and 1.
My network, most of the time, converges. Other times, it doesn't.
I have tried using both sigmoid and tanh in the hidden layer, but neither seems to guarantee convergence.
I have tried randomly generating weights between 0 and 1 as well as between -1 and 1 using a uniform distribution. I have tried using Xavier Initialisation as both uniform and normal distribution. None of these seems to prevent the network from failing to converge. I have tried different combinations of activation function and weight generation.
Here is my complete code:
#include <iostream>
#include <array>
#include <random>
#include <chrono>
#include <iomanip>
#include <fstream>
#include <algorithm>
#include <cmath> // needed for std::sqrt, std::exp, std::tanh, std::pow
typedef float DataType;
typedef DataType (*ActivationFuncPtr)(const DataType&);
const DataType learningRate = std::sqrt(2.f);
const DataType momentum = 0.25f;
const std::size_t numberEpochs = 20000;
DataType sigmoid(const DataType& x)
{
return DataType(1) / (DataType(1) + std::exp(-x));
}
DataType sigmoid_derivative(const DataType& x)
{
return x * (DataType(1) - x);
}
DataType relu(const DataType& x)
{
return x <= 0 ? 0 : x;
}
DataType relu_derivative(const DataType& x)
{
return x <= 0 ? 0 : 1;
}
DataType tanh(const DataType& x)
{
return std::tanh(x);
}
DataType tanh_derivative(const DataType& x)
{
return DataType(1) - x * x;
}
DataType leaky_relu(const DataType& x)
{
return x <= 0 ? DataType(0.01) * x : x;
}
DataType leaky_relu_derivative(const DataType& x)
{
return x <= 0 ? DataType(0.01) : 1;
}
template<std::size_t NumInputs>
class Neuron
{
public:
Neuron(ActivationFuncPtr activationFunction, ActivationFuncPtr derivativeFunc)
:
m_activationFunction(activationFunction),
m_derivativeFunction(derivativeFunc)
{
RandomiseWeights();
}
void RandomiseWeights()
{
std::generate(m_weights.begin(),m_weights.end(),[&]()
{
return m_xavierNormalDis(m_mt);
});
m_biasWeight = m_xavierNormalDis(m_mt);
for(std::size_t i = 0; i < NumInputs+1; ++i)
m_previousWeightUpdates[i] = 0;
}
void FeedForward(const std::array<DataType,NumInputs>& inputValues)
{
DataType sum = m_biasWeight;
for(std::size_t i = 0; i < inputValues.size(); ++i)
sum += inputValues[i] * m_weights[i];
m_output = m_activationFunction(sum);
m_netInput = sum;
}
DataType GetOutput() const
{
return m_output;
}
DataType GetNetInput() const
{
return m_netInput;
}
std::array<DataType,NumInputs> Backpropagate(const DataType& error,
const std::array<DataType,NumInputs>& inputValues,
std::array<DataType,NumInputs+1>& weightAdjustments)
{
DataType errorOverOutput = error;
DataType outputOverNetInput = m_derivativeFunction(m_output);
std::array<DataType,NumInputs> netInputOverWeight;
for(std::size_t i = 0; i < NumInputs; ++i)
{
netInputOverWeight[i] = inputValues[i];
}
DataType netInputOverBias = DataType(1);
std::array<DataType,NumInputs> errorOverWeight;
for(std::size_t i = 0; i < NumInputs; ++i)
{
errorOverWeight[i] = errorOverOutput * outputOverNetInput * netInputOverWeight[i];
}
DataType errorOverBias = errorOverOutput * outputOverNetInput * netInputOverBias;
for(std::size_t i = 0; i < NumInputs; ++i)
{
weightAdjustments[i] = errorOverWeight[i];
}
weightAdjustments[NumInputs] = errorOverBias;
DataType errorOverNetInput = errorOverOutput * outputOverNetInput;
std::array<DataType,NumInputs> errorWeights;
for(std::size_t i = 0; i < NumInputs; ++i)
{
errorWeights[i] = errorOverNetInput * m_weights[i];
}
return errorWeights;
}
void AdjustWeights(const std::array<DataType,NumInputs+1>& adjustments)
{
for(std::size_t i = 0; i < NumInputs; ++i)
{
m_weights[i] = m_weights[i] - learningRate * adjustments[i] + momentum * m_previousWeightUpdates[i];
m_previousWeightUpdates[i] = learningRate * adjustments[i] + momentum * m_previousWeightUpdates[i];
}
m_biasWeight = m_biasWeight - learningRate * adjustments[NumInputs] + momentum * m_previousWeightUpdates[NumInputs];
m_previousWeightUpdates[NumInputs] = learningRate * adjustments[NumInputs] + momentum * m_previousWeightUpdates[NumInputs];
}
const std::array<DataType,NumInputs>& GetWeights() const { return m_weights; }
const DataType& GetBiasWeight() const { return m_biasWeight; }
protected:
static std::mt19937 m_mt;
static std::uniform_real_distribution<DataType> m_uniformDisRandom;
static std::uniform_real_distribution<DataType> m_xavierUniformDis;
static std::normal_distribution<DataType> m_xavierNormalDis;
std::array<DataType,NumInputs> m_weights;
DataType m_biasWeight;
ActivationFuncPtr m_activationFunction;
ActivationFuncPtr m_derivativeFunction;
DataType m_output;
DataType m_netInput;
std::array<DataType,NumInputs+1> m_previousWeightUpdates;
};
template<std::size_t NumInputs>
std::mt19937 Neuron<NumInputs>::m_mt(std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count());
template<std::size_t NumInputs>
std::uniform_real_distribution<DataType> Neuron<NumInputs>::m_uniformDisRandom(-1,1);
template<std::size_t NumInputs>
std::uniform_real_distribution<DataType> Neuron<NumInputs>::m_xavierUniformDis(-std::sqrt(6.f / NumInputs+1),std::sqrt(6.f / NumInputs+1));
template<std::size_t NumInputs>
std::normal_distribution<DataType> Neuron<NumInputs>::m_xavierNormalDis(0,std::sqrt(2.f / NumInputs+1));
int main()
{
std::ofstream file("error_out.csv", std::ios::out | std::ios::trunc);
if(!file.is_open())
{
std::cout << "couldn't open file" << std::endl;
return 0;
}
file << std::fixed << std::setprecision(80);
std::array<std::array<DataType,2>,4> inputData = {{{0,0},{0,1},{1,0},{1,1}}};
std::array<std::array<DataType,1>,4> desiredOutputs = {{{0},{1},{1},{0}}};
std::array<Neuron<2>*,2> hiddenLayer1 =
{{
new Neuron<2>(tanh, tanh_derivative),
new Neuron<2>(tanh, tanh_derivative)
}};
std::array<Neuron<2>*,1> outputLayer =
{{
new Neuron<2>(sigmoid, sigmoid_derivative)
}};
std::cout << std::fixed << std::setprecision(80);
std::cout << "Initial Weights: " << std::endl;
const std::array<DataType,2>& outputWeights = outputLayer[0]->GetWeights();
const DataType& outputBias = outputLayer[0]->GetBiasWeight();
const std::array<DataType,2>& hidden1Weights = hiddenLayer1[0]->GetWeights();
const DataType& hidden1Bias = hiddenLayer1[0]->GetBiasWeight();
const std::array<DataType,2>& hidden2Weights = hiddenLayer1[1]->GetWeights();
const DataType& hidden2Bias = hiddenLayer1[1]->GetBiasWeight();
std::cout << "W0: " << hidden1Weights[0] << "\n"
<< "W1: " << hidden1Weights[1] << "\n"
<< "B0: " << hidden1Bias << "\n"
<< "W2: " << hidden2Weights[0] << "\n"
<< "W3: " << hidden2Weights[1] << "\n"
<< "B1: " << hidden2Bias << "\n"
<< "W4: " << outputWeights[0] << "\n"
<< "W5: " << outputWeights[1] << "\n"
<< "B2: " << outputBias << "\n" << std::endl;
DataType finalMSE = 0;
std::size_t epochNumber = 0;
while(epochNumber < numberEpochs)
{
DataType epochMSE = 0;
for(std::size_t row = 0; row < inputData.size(); ++row)
{
const std::array<DataType,2>& dataRow = inputData[row];
const std::array<DataType,1>& outputRow = desiredOutputs[row];
// Feed the values through to the output layer
hiddenLayer1[0]->FeedForward(dataRow);
hiddenLayer1[1]->FeedForward(dataRow);
DataType output0 = hiddenLayer1[0]->GetOutput();
DataType output1 = hiddenLayer1[1]->GetOutput();
outputLayer[0]->FeedForward({output0,output1});
DataType finalOutput0 = outputLayer[0]->GetOutput();
// if there was more than 1 output neuron these errors need to be summed together first to create total error
DataType totalError = 0.5 * std::pow(outputRow[0] - finalOutput0,2.f);
epochMSE += totalError * totalError;
DataType propagateError = -(outputRow[0] - finalOutput0);
std::array<DataType,3> weightAdjustmentsOutput;
std::array<DataType,2> outputError = outputLayer[0]->Backpropagate(propagateError,
{output0,output1},
weightAdjustmentsOutput);
std::array<DataType,3> weightAdjustmentsHidden1;
hiddenLayer1[0]->Backpropagate(outputError[0],dataRow,weightAdjustmentsHidden1);
std::array<DataType,3> weightAdjustmentsHidden2;
hiddenLayer1[1]->Backpropagate(outputError[1],dataRow,weightAdjustmentsHidden2);
outputLayer[0]->AdjustWeights(weightAdjustmentsOutput);
hiddenLayer1[0]->AdjustWeights(weightAdjustmentsHidden1);
hiddenLayer1[1]->AdjustWeights(weightAdjustmentsHidden2);
}
epochMSE *= DataType(1) / inputData.size();
file << epochNumber << "," << epochMSE << std::endl;
finalMSE = epochMSE;
++epochNumber;
}
std::cout << std::fixed << std::setprecision(80)
<< "\n\n====================================\n"
<< " TRAINING COMPLETE"
<< "\n\n====================================" << std::endl;
std::cout << "Final Error: " << finalMSE << std::endl;
std::cout << "Number epochs: " << epochNumber << "/" << numberEpochs << std::endl;
// output tests
std::cout << std::fixed << std::setprecision(2)
<< "\n\n====================================\n"
<< " FINAL TESTS"
<< "\n\n====================================" << std::endl;
for(std::size_t row = 0; row < inputData.size(); ++row)
{
const std::array<DataType,2>& dataRow = inputData[row];
const std::array<DataType,1>& outputRow = desiredOutputs[row];
std::cout << dataRow[0] << "," << dataRow[1] << " (" << outputRow[0] << ") : ";
// Feed the values through to the output layer
hiddenLayer1[0]->FeedForward(dataRow);
hiddenLayer1[1]->FeedForward(dataRow);
DataType output0 = hiddenLayer1[0]->GetOutput();
DataType output1 = hiddenLayer1[1]->GetOutput();
outputLayer[0]->FeedForward({output0,output1});
DataType finalOutput0 = outputLayer[0]->GetOutput();
std::cout << finalOutput0 << std::endl;
}
file.close();
return 0;
}
When things are working, I get an output like:
====================================
FINAL TESTS
====================================
0.00,0.00 (0.00) : 0.00
0.00,1.00 (1.00) : 0.99
1.00,0.00 (1.00) : 0.99
1.00,1.00 (0.00) : 0.00
When it's not working I get an output like:
====================================
FINAL TESTS
====================================
0.00,0.00 (0.00) : 0.57
0.00,1.00 (1.00) : 0.57
1.00,0.00 (1.00) : 1.00
1.00,1.00 (0.00) : 0.00
When it's working, the error for each epoch looks like:
<error-per-epoch plot omitted>
The initial weights were:
W0: -0.47551780939102172851562500000000000000000000000000000000000000000000000000000000
W1: 0.40949764847755432128906250000000000000000000000000000000000000000000000000000000
B0: 2.33756542205810546875000000000000000000000000000000000000000000000000000000000000
W2: 2.16713166236877441406250000000000000000000000000000000000000000000000000000000000
W3: -2.74766492843627929687500000000000000000000000000000000000000000000000000000000000
B1: 0.34863436222076416015625000000000000000000000000000000000000000000000000000000000
W4: -0.53460156917572021484375000000000000000000000000000000000000000000000000000000000
W5: 0.04940851405262947082519531250000000000000000000000000000000000000000000000000000
B2: 0.97842389345169067382812500000000000000000000000000000000000000000000000000000000
But when it doesn't work, the error for each epoch looks like:
<error-per-epoch plot omitted>
The initial weights in this particular run were:
W0: 1.16670060157775878906250000000000000000000000000000000000000000000000000000000000
W1: -2.37987256050109863281250000000000000000000000000000000000000000000000000000000000
B0: 0.41097882390022277832031250000000000000000000000000000000000000000000000000000000
W2: -0.23449644446372985839843750000000000000000000000000000000000000000000000000000000
W3: -1.99990248680114746093750000000000000000000000000000000000000000000000000000000000
B1: 1.77582693099975585937500000000000000000000000000000000000000000000000000000000000
W4: 1.98818421363830566406250000000000000000000000000000000000000000000000000000000000
W5: 2.71223402023315429687500000000000000000000000000000000000000000000000000000000000
B2: -0.79067271947860717773437500000000000000000000000000000000000000000000000000000000
I see nothing really telling about these weights that can help me generate good starting weights (which is what I believe the problem to be, regardless of the activation function used).
Question: What can I do to ensure convergence occurs?
Do I need to change the weight initialisation?
Do I need to use different activation functions?
Do I need more layers or a different number of nodes?
I haven't read all your code because it is quite long, but:
It would be nice to have a NeuralNetwork class and a Connection class eventually to avoid writing all the logic in main.
I like the ActivationFuncPtr typedef, which you could use to try and mix up different activation functions for different Neurons (maybe with a genetic algorithm)?
Now, to answer your question: there are really no definitive answers, but I can give you a few pieces of advice:
Initializing with a predetermined set of weights should indeed help prevent falling into a local minimum. You could try different sets of weights and see which ones work best, and what happens when you change a specific one (you're doing supervised learning anyway). If you are doing research, it would give you a few free paragraphs ;)
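One way to make such experiments reproducible (my own sketch, adapting the static RNG member from your code) is to seed the generator with a fixed constant instead of the clock:

// Hypothetical tweak to the question's static member definition: a fixed
// seed makes every run start from the same "random" weights, so a given
// initialisation (good or bad) can be reproduced and studied.
template<std::size_t NumInputs>
std::mt19937 Neuron<NumInputs>::m_mt(42); // any fixed constant will do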
Different activation functions usually don't help much with convergence, but it is worth a try. You could adjust the sigmoid to 1/(1+exp(-4*x)), for instance, the 4 being arbitrary.
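A sketch of that steeper variant, following the conventions already in your code (the DataType typedef, and derivatives written in terms of the neuron's output rather than its net input):

DataType steep_sigmoid(const DataType& x)
{
    // logistic function with an arbitrary steepness factor of 4
    return DataType(1) / (DataType(1) + std::exp(DataType(-4) * x));
}
DataType steep_sigmoid_derivative(const DataType& x)
{
    // derivative in terms of the output y: dy/dnet = 4 * y * (1 - y)
    return DataType(4) * x * (DataType(1) - x);
}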
XOR has been solved with fewer nodes than that (see the NEAT paper, a neuroevolution approach to neural networks). Increasing the number of nodes could make it even harder to converge.
One (dirty) way to prevent early convergence would be to detect that you have fallen into a local minimum and then restart with new random weights.
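A rough sketch of that restart idea (the plateau test and the patience threshold are my own arbitrary choices; RandomiseWeights(), epochMSE, and the layer arrays are from your code):

#include <limits>
// before the epoch loop:
DataType bestMSE = std::numeric_limits<DataType>::max();
std::size_t epochsStalled = 0;
const std::size_t patience = 2000; // arbitrary stall threshold
// inside the epoch loop, after epochMSE has been computed:
if (epochMSE < bestMSE - DataType(1e-6)) {
    bestMSE = epochMSE; // still improving
    epochsStalled = 0;
} else if (++epochsStalled > patience) {
    for (auto* n : hiddenLayer1) n->RandomiseWeights(); // give up on this basin
    for (auto* n : outputLayer) n->RandomiseWeights();  // and start over
    bestMSE = std::numeric_limits<DataType>::max();
    epochsStalled = 0;
}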
Another way would be to use a genetic algorithm (I'm a bit biased, because it is my field of study).
I'm trying to write a very simple C++ program which outputs a lookup table with the corresponding x and y values of the sine function. The code that I wrote is the following:
#include "stdafx.h"
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
double hw = 4.0;
int nsteps = 30;
const double PI = 3.14159;
const double maxx = hw * PI;
const double deltax = maxx / nsteps;
double x = 0.0;
for (int i = 0; i < nsteps; i++) {
const double f = sin(x);
cerr << x << "\t" << f << endl;
x = x + deltax;
}
return 0;
}
Now the program works, but my problem is that the values are not aligned properly, as shown in the following picture:
<screenshot of the misaligned output omitted>
So is there any way to make the second column of values actually form a column, with all the values aligned at the same position? What could I use instead of \t?
The other answer's solution does not set the alignment correctly for negative values. I would use a function to handle the formatting:
#include "stdafx.h"
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
void printxy(double x, double y, int width){
cout << setw(width) << x << "\t";
if (y < 0) cout << "\b";
cout << setw(width) << y << "\n";
}
int main(){
double hw = 4.0;
int nsteps = 30;
const double PI = 3.14159;
const double maxx = hw * PI;
const double deltax = maxx / nsteps;
double x = 0.0;
int decimals = 6;
int width = 8; //Adjust as needed for large numbers/many decimals
cout << std::setprecision(decimals);
cout << std::setw(width);
cout.setf(ios::left);
for (int i = 0; i < nsteps; i++) {
const double y = sin(x);
printxy(x, y, width);
x = x + deltax;
}
}
The output is now formatted correctly:
0 0
0.418879 0.406736
0.837757 0.743144
1.25664 0.951056
1.67551 0.994522
2.09439 0.866026
2.51327 0.587787
2.93215 0.207914
3.35103 -0.207909
3.76991 -0.587783
4.18879 -0.866024
4.60767 -0.994521
5.02654 -0.951058
5.44542 -0.743148
5.8643 -0.406741
6.28318 -5.30718e-06
6.70206 0.406731
7.12094 0.743141
7.53982 0.951055
7.95869 0.994523
8.37757 0.866029
8.79645 0.587791
9.21533 0.207919
9.63421 -0.207904
10.0531 -0.587778
10.472 -0.866021
10.8908 -0.994521
11.3097 -0.951059
11.7286 -0.743151
12.1475 -0.406746
I would also discourage the use of cerr for these kinds of printing operations. It is intended for printing errors. Use cout instead (it works the same way for all practical purposes).
I should also mention that endl is a ticking bomb: it flushes the output, meaning that the internal buffer of the stream is written out (be it the console, a file or whatever). When applications scale and become more IO intensive, this can become a significant performance problem: the buffer that is intended to increase the IO performance is potentially unused due to frequent endl insertions. The solution is to use the newline character '\n'.
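For example, adapting the question's print statement (the visible output is identical; only the flushing behaviour differs):

cerr << x << "\t" << f << endl; // writes a newline and flushes the stream every time
cout << x << "\t" << f << '\n'; // writes a newline and lets the buffer decide when to flush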
Use std::setprecision() to set the number of digits printed after the decimal point, and std::setw() to set the width of the output field. You need to include <iomanip>. Example:
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

int main()
{
    double hw = 4.0;
    int nsteps = 30;
    const double PI = 3.14159;
    const double maxx = hw * PI;
    const double deltax = maxx / nsteps;
    double x = 0.0;
    cerr << std::setprecision(8);
    for (int i = 0; i < nsteps; i++) {
        const double f = sin(x);
        cerr << std::setw(20) << x << std::setw(20) << f << endl;
        x = x + deltax;
    }
    return 0;
}
Output is:
0 0
0.41887867 0.40673632
0.83775733 0.74314435
1.256636 0.95105619
1.6755147 0.99452204
2.0943933 0.86602629
2.513272 0.58778697
2.9321507 0.20791411
3.3510293 -0.20790892
3.769908 -0.58778268
4.1887867 -0.86602363
//...
Hi. We are supposed to model the height and velocity of a rocket in C++ for our final project, with the user entering the total flight time and the delta time for the points during flight that they wish to measure. The following is the code I have written for this project. The velocity is supposed to start positive; after 60 seconds there is no fuel left and thus no thrust, so the velocity should start becoming negative. However, both my height and velocity come out negative from the start and reach negative infinity by the end.
#include <iostream>
using namespace std;

int main()
{
    float *v;
    float *h;
    float a;
    double mass = 0.0, thrust, time, dt;
    double g = 32.2;
    double K = 0.008;
    cout << "enter time";
    cin >> time;
    cout << "enter dt";
    cin >> dt;
    a = (time/dt);
    v = new float[a];
    h = new float[a];
    v[0] = 0;
    h[0] = 0;
    float tt = 0;
    // for loop to calculate velocity and time
    for (int i = 0; i <= (time/dt); i++)
    {
        tt = dt + tt;
        if (tt <= 60)
        {
            mass = (3000-(40*tt)/g);
            thrust = 7000;
        }
        if (tt > 60)
        {
            mass = 3000/g;
            thrust = 0;
        }
        // these are the formulas for velocity and height position our prof gave us
        v[i+1] = v[i] - (K/mass)*v[i]*v[i-1] * dt + (thrust/mass - g)*dt;
        h[i+1] = v[i+1] * dt + h[i];
    }
    // for loop to output
    for (int i = 0; i <= (time/dt); i++)
    {
        cout << i << " - " << "Velocity:" << v[i+1] << " Position:" << h[i+1] << endl;
    }
    return 0;
}
sample output:
enter time120
enter dt.01
0 - Velocity:-0.298667 Position:-0.00298667
1 - Velocity:-0.597333 Position:-0.00896
2 - Velocity:-0.895999 Position:-0.01792
3 - Velocity:-1.19467 Position:-0.0298666
4 - Velocity:-1.49333 Position:-0.0448
5 - Velocity:-1.792 Position:-0.0627199
6 - Velocity:-2.09066 Position:-0.0836266
7 - Velocity:-2.38933 Position:-0.10752
<...i left out a lot of numbers in the middle to not make this post too long...>
11994 - Velocity:-inf Position:-inf
11995 - Velocity:-inf Position:-inf
11996 - Velocity:-inf Position:-inf
11997 - Velocity:-inf Position:-inf
11998 - Velocity:-inf Position:-inf
11999 - Velocity:-inf Position:-inf
12000 - Velocity:-inf Position:-inf
Program ended with exit code: 0
I have compared with my friends, who are getting good results, and we cannot find a difference between their code and mine. The rest of my program is complete and working fine; I just cannot figure out why my calculations are wrong.
Ignoring the out-of-bounds access to v[-1] when i is zero, there is something wrong with your thrust, mass, or g.
thrust is 7000, mass is 3000 at time = 0. That means thrust/mass is just over 2. With g=32 (really? you are doing rocketry calculations in imperial units?), that means the rocket never has enough thrust to counter gravity, and just sits on the pad.
Edit: That would be reality. Because this is a fairly simple simulation, and doesn't include a "pad", in the model the rocket starts free-falling to the centre of the earth.
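To check this against the posted output (my own arithmetic, using the question's values): v[0] is 0, so the drag term vanishes on the first step, and

v[1] = (thrust/mass - g) * dt = (7000/3000 - 32.2) * 0.01 ≈ -0.298667

which is exactly the first velocity in the sample output.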
You are using v[i-1] but i starts out at 0, therefore this calculation is going to use whatever happens to be at v[-1]. I suggest you initialize i to 1 (and then check all the uses of i to ensure that the correct array elements will be used).
I am not 100% convinced about the formula; I don't understand why it contains both a v[i] and a v[i-1] term. Anyhow, even if it is correct, in the first iteration (i==0) you are accessing out of bounds of the velocity array: v[i-1]. That is undefined behaviour.
To fix this, either review the formula (does it really contain a v[i-1] term?) or start the iteration at i=1 (and initialize v[0] and v[1]).
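A minimal sketch of the second option (a hypothetical fragment reusing the question's variable names, with steps standing in for time/dt):

// first step handled separately: v[0] == 0, so drag contributes nothing yet
v[1] = v[0] + (thrust/mass - g) * dt;
h[1] = h[0] + v[1] * dt;
// from i = 1 onward, v[i-1] is always a valid index
for (int i = 1; i < steps; i++)
{
    v[i+1] = v[i] - (K/mass)*v[i]*v[i-1]*dt + (thrust/mass - g)*dt;
    h[i+1] = h[i] + v[i+1]*dt;
}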
Thank you guys for your help, I was able to solve it.
#include <iostream>
using namespace std;

int main()
{
    double *v;
    double *h;
    double g = 32.2;
    double K = .008;
    double mass;
    double t;
    double dt;
    double tt = 0;
    double thrust;
    cout << "t \n";
    cin >> t;
    cout << "dt \n";
    cin >> dt;
    double a = t/dt;
    v = new double[(int)a + 2]; // +2 because the loop writes v[i+1] up to i == a
    h = new double[(int)a + 2];
    v[0] = 0;
    h[0] = 0;
    for (int i = 0; i <= a; i++)
    {
        tt = tt + dt;
        if (i == 0) // first step handled separately so v[i-1] is never read out of bounds
        {
            thrust = 7000;
            mass = (3000 - 40*dt)/g;
            v[i+1] = v[i] + ((thrust/mass) - g)*dt;
        }
        else if (tt > 0 && tt < 60)
        {
            thrust = 7000;
            mass = (3000 - 40*tt)/g;
            v[i+1] = v[i] - ((K/mass)*v[i]*v[i-1] * dt) + ((thrust/mass) - g)*dt;
        }
        else // tt >= 60: out of fuel, no thrust
        {
            thrust = 0;
            mass = 600/g;
            v[i+1] = v[i] - ((K/mass)*v[i]*v[i-1] * dt) + ((thrust/mass) - g)*dt;
        }
        h[i+1] = v[i+1] * dt + h[i];
    }
    cout << " end results \n";
    for (int i = 0; i <= a; i++)
    {
        cout << i << " v - " << v[i] << " h - " << h[i] << endl;
    }
    return 0;
}
New results:
t
120
dt
.01
end results
0 v - 0 h - 0
1 v - 0.429434 h - 0.00429434
2 v - 0.858967 h - 0.012884
3 v - 1.2886 h - 0.02577
4 v - 1.71833 h - 0.0429534
5 v - 2.14817 h - 0.064435
6 v - 2.5781 h - 0.090216
7 v - 3.00813 h - 0.120297
You can see below, at 60 s, the point where the velocity changes because there is no more thrust:
5997 - Velocity:890.361
5998 - Velocity:890.392
5999 - Velocity:890.422
6000 - Velocity:886.697
6001 - Velocity:882.985
6002 - Velocity:879.302