c++ - Value changing before I save it

I'm reading data from a game via a shared memory mapping, and everything works well.
I get the lap time in milliseconds and convert it to seconds (milliseconds / 1000.0f).
I then track the car's position until it reaches 1.000f, which marks the end of the lap, at which point I need to save some data to a file.
The problem is that lapTimeInSeconds keeps changing at the end of the loop, before I can extract the values I need (minutes : seconds : milliseconds):
while (true) {
    //........code........
    // lap time
    float lapTimeInSeconds = (float) ((float)graphic.iCurrentTime / 1000.0f);
    printf("\rLAP: %.3f", lapTimeInSeconds);
    // save
    memcpy(&save_struct.lapTime_seconds, &lapTimeInSeconds, 4);
    memcpy(&save_struct.normalizedCarPosition, &graphic.normalizedCarPosition, 4);
    memcpy(&save_struct.gaz, &physics.gaz, 4);
    memcpy(&save_struct.brake, &physics.brake, 4);
    memcpy(&save_struct.speedKmh, &physics.speedKmh, 4);
    memcpy(&save_struct.steerAngle, &physics.steerAngle, 4);
    memcpy(&save_struct.gear, &physics.gear, 4);
    memcpy(&save_struct.carCoordiantes, &graphic.carCoordiantes, sizeof(save_struct.carCoordiantes));
    // write the bytes into the file
    fwrite(&save_struct, sizeof(SaveStruct), 1, saveFile);
    if (graphic.normalizedCarPosition == 1) {
        fclose(saveFile);
        printf("\nFINAL LAP TIME: %.3f\n", lapTimeInSeconds); // PROBLEM: by this point the value has reset to 0.000 (sometimes 0.654 s)
        uint8_t seconds = (int) lapTimeInSeconds % 60;
        uint8_t minutes = lapTimeInSeconds / 60;
        uint16_t millisecond = (lapTimeInSeconds - ((minutes * 60) + seconds)) * 1000;
        std::string final_file_name = "laps/lap-" + std::to_string(lapCount) + "-" + std::to_string(minutes) + "-" + std::to_string(seconds) + "-" + std::to_string(millisecond) + ".lap";
        rename("laps/lap-not-completed.lap", final_file_name.c_str());
        lapCount++;
        saveFile = fopen("laps/lap-not-completed.lap", "wb");
    }
}
The file name ends up as 0 minutes, 0 seconds, 659 milliseconds.
How can I keep lapTimeInSeconds from changing until I've saved the file?
Thank you.

The problem was:
a car position of 1.000 means the end of the lap, but the game resets the time at around 0.996,
so I had to change the condition.
@tadman, thanks to you I also changed:
// save
memcpy(&save_struct.lapTime_seconds, &lapTimeInSeconds, 4);
memcpy(&save_struct.normalizedCarPosition, &graphic.normalizedCarPosition, 4);
memcpy(&save_struct.gaz, &physics.gaz, 4);
memcpy(&save_struct.brake, &physics.brake, 4);
memcpy(&save_struct.speedKmh, &physics.speedKmh, 4);
memcpy(&save_struct.steerAngle, &physics.steerAngle, 4);
memcpy(&save_struct.gear, &physics.gear, 4);
memcpy(&save_struct.carCoordiantes, &graphic.carCoordiantes, sizeof(save_struct.carCoordiantes));
To:
save_struct.lapTime_seconds = (float) ((float)graphic.iCurrentTime / 1000.0f);
save_struct.normalizedCarPosition = graphic.normalizedCarPosition;
save_struct.gaz = physics.gaz;
save_struct.brake = physics.brake;
save_struct.speedKmh = physics.speedKmh;
save_struct.steerAngle = physics.steerAngle;
save_struct.gear = physics.gear;
save_struct.carCoordiantes[0][0] = graphic.carCoordiantes[0][0]; // x
save_struct.carCoordiantes[0][1] = graphic.carCoordiantes[0][1]; // y
save_struct.carCoordiantes[0][2] = graphic.carCoordiantes[0][2]; // z
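The workaround above changes the condition; a related defensive pattern (a sketch under assumed names, not the game's actual API) is to latch each frame's values and treat a backwards jump in normalizedCarPosition as the lap boundary, so the final time is taken from the frame before the game resets it:

```cpp
#include <cstdio>

// Minimal sketch (hypothetical types, not the game's real API): latch the
// lap time from the previous frame and detect lap completion by the position
// wrapping from near 1.0 back to near 0.0, instead of comparing to 1.0f.
struct Sample { float normalizedCarPosition; float lapTimeSeconds; };

struct LapLatch {
    float prevPos = 0.0f;
    float prevTime = 0.0f;
    bool completed = false;   // true for one update when a lap finishes
    float finalTime = 0.0f;   // lap time captured BEFORE the game reset it

    void update(const Sample& s) {
        completed = false;
        // A large backwards jump in position means the lap wrapped around.
        if (s.normalizedCarPosition < prevPos - 0.5f) {
            completed = true;
            finalTime = prevTime; // last value from before the reset
        }
        prevPos = s.normalizedCarPosition;
        prevTime = s.lapTimeSeconds;
    }
};
```

With feeds like this, comparing a float against exactly 1.000f is fragile; the wrap test fires even when the position jumps from 0.996 straight to 0.004.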

Related

Stereo ping-pong delay in C++

I have to create a stereo ping-pong delay with these parameters:
• Delay time (0 – 3000 milliseconds)
• Feedback (0 – 0.99)
• Wet/dry mix (0 – 1.0)
I have managed to implement the stereo in/out and the three parameters, but I'm struggling with how to implement the ping-pong. I have this code in the process block, but it only replays the left and right channels in the opposite channels once. Is there a simple way to loop this so it repeats over and over rather than just once, or is this not the best way to implement ping-pong? Any help would be great!
//ping pong implementation
//ping pong implementation
for (int i = 0; i < buffer.getNumSamples(); i++)
{
    // Reduce the amplitude of each sample in the block for the
    // left and right channels
    //channelDataLeft[i] = channelDataLeft[i] * 0.5;
    //channelDataRight[i] = channelDataRight[i] * 0.25;
    if (i % 2 == 1) //if i is odd this will play
    {
        // Calculate the next output sample (current input sample + delayed version)
        float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataLeft[readIndex]));
        float outputSampleRight = (channelDataRight[i] + (mix * delayDataRight[readIndex]));
        // Write the current input into the delay buffer along with the delayed sample
        delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
        delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
        // Increment read and write index; if either passes the buffer length,
        // wrap back around to zero
        if (++readIndex >= delayBufferLength)
            readIndex = 0;
        if (++writeIndex >= delayBufferLength)
            writeIndex = 0;
        // Assign output sample computed above to the output buffer
        channelDataLeft[i] = outputSampleLeft;
        channelDataRight[i] = outputSampleRight;
    }
    else //if i is even then this will play
    {
        // Calculate the next output sample (delayed versions swapped between channels)
        float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataRight[readIndex]));
        float outputSampleRight = (channelDataRight[i] + (mix * delayDataLeft[readIndex]));
        // Write the current input into the delay buffer along with the delayed sample
        delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
        delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
        // Increment read and write index; if either passes the buffer length,
        // wrap back around to zero
        if (++readIndex >= delayBufferLength)
            readIndex = 0;
        if (++writeIndex >= delayBufferLength)
            writeIndex = 0;
        // Assign output sample computed above to the output buffer
        channelDataLeft[i] = outputSampleLeft;
        channelDataRight[i] = outputSampleRight;
    }
}
I'm not really sure why you have the modulo and different behavior based on the sample index. A ping-pong delay should have two delay buffers, one for each channel. The input of one stereo channel plus the feedback of the opposite channel's delay buffer should be fed into each delay.
Here is some pseudo-code of the logic:
float wetDryMix = 0.5f;
float wetFactor = wetDryMix;
float dryFactor = 1.0f - wetDryMix;
float feedback = 0.6f;

int sampleRate = 44100;
int sampleCount = sampleRate * 10;
float[] leftInSamples = new float[sampleCount];
float[] rightInSamples = new float[sampleCount];
float[] leftOutSamples = new float[sampleCount];
float[] rightOutSamples = new float[sampleCount];

int delayBufferSize = sampleRate * 3;
float[] delayBufferLeft = new float[delayBufferSize];
float[] delayBufferRight = new float[delayBufferSize];
int delaySamples = sampleRate / 2;
int delayReadIndex = 0;
int delayWriteIndex = delaySamples;

for (int sampleIndex = 0; sampleIndex < sampleCount; sampleIndex++) {
    // Read samples in from input
    float leftChannel = leftInSamples[sampleIndex];
    float rightChannel = rightInSamples[sampleIndex];
    // Make sure delay ring buffer indices are within range
    delayReadIndex = delayReadIndex % delayBufferSize;
    delayWriteIndex = delayWriteIndex % delayBufferSize;
    // Get the current output of the delay ring buffers
    float delayOutLeft = delayBufferLeft[delayReadIndex];
    float delayOutRight = delayBufferRight[delayReadIndex];
    // Calculate what is put into each delay buffer: the current input signal
    // plus the delay output attenuated by the feedback factor.
    // Notice that the right delay output is fed into the left delay and vice
    // versa; this makes sound from each stereo channel ping-pong back and forth.
    float delayInputLeft = leftChannel + delayOutRight * feedback;
    float delayInputRight = rightChannel + delayOutLeft * feedback;
    // Alternatively you could push a mono mix into one delay channel along with
    // the current feedback delay; this ping-pongs a mixed mono signal between channels:
    //float delayInputLeft = leftChannel + rightChannel + delayOutRight * feedback;
    //float delayInputRight = delayOutLeft * feedback;
    // Push the calculated values into the delay ring buffers
    delayBufferLeft[delayWriteIndex] = delayInputLeft;
    delayBufferRight[delayWriteIndex] = delayInputRight;
    // Mix the dry input signal with the current delayed output
    float outputLeft = leftChannel * dryFactor + delayOutLeft * wetFactor;
    float outputRight = rightChannel * dryFactor + delayOutRight * wetFactor;
    leftOutSamples[sampleIndex] = outputLeft;
    rightOutSamples[sampleIndex] = outputRight;
    // Increment ring buffer indices
    delayReadIndex++;
    delayWriteIndex++;
}
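For reference, here is the same cross-feedback structure as compilable C++ (a sketch; the class name and the single shared read/write offset are my own, and the delay length is fixed at construction):

```cpp
#include <vector>
#include <cstddef>

// Cross-feedback ping-pong delay: each channel's delay line is fed the input
// plus the OPPOSITE channel's delayed output, so echoes alternate sides.
struct PingPongDelay {
    std::vector<float> delayL, delayR;
    std::size_t readIdx = 0;
    std::size_t writeIdx;
    float feedback, wet, dry;

    PingPongDelay(std::size_t delaySamples, std::size_t maxDelay,
                  float feedback, float wetDryMix)
        : delayL(maxDelay, 0.0f), delayR(maxDelay, 0.0f),
          writeIdx(delaySamples % maxDelay),
          feedback(feedback), wet(wetDryMix), dry(1.0f - wetDryMix) {}

    // Process one stereo frame in place.
    void process(float& left, float& right) {
        float outL = delayL[readIdx];
        float outR = delayR[readIdx];
        // Cross-feed: left input plus delayed RIGHT goes into the left line.
        delayL[writeIdx] = left + outR * feedback;
        delayR[writeIdx] = right + outL * feedback;
        left  = left * dry + outL * wet;
        right = right * dry + outR * wet;
        readIdx = (readIdx + 1) % delayL.size();
        writeIdx = (writeIdx + 1) % delayL.size();
    }
};
```

Feeding a single left-channel impulse through it shows the echo alternating sides: left after one delay period, right after two, left again after three, each attenuated by the feedback factor.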

OpenCV vs Matlab time of execution and optimizations

I'm going to implement the equalization proposed in a paper.
The method consists of substituting each value of each channel using the formula
on the 16th slide of this presentation (slides).
First of all, I implemented this equalization function in Matlab in two ways: in the first, I compute the histograms (counts) of each channel in order to know
the number of values less than or equal to a specific value in the range [0, 255]. Alternatively, in the second way, I use matrix operations (R<=value, G<=value, V<=value).
Initially I thought the second method would be the faster of the two, but it isn't, which surprised me.
Then I implemented this function in OpenCV, and now I'm surprised again because the Matlab version runs faster than the C++ one! In Matlab I measured these times:
Matlab, method 1: 1.36 seconds
Matlab, method 2: 1.74 seconds
In C++ using OpenCV I found these values:
OpenCV, method 1: 2380 milliseconds
OpenCV, method 2: 4651 milliseconds
I obtained the same results, so the function is correct, but I think something is wrong, or something could be improved in terms of computation time, due to my inexperience with OpenCV; I expected a compiled C++ function to be faster than Matlab. So my question is: how can I optimize the C++ code? Below is the C++ code using both methods.
//I have an RGB image in the Mat 'image'
Mat channel[3];
// Splitting method 1
split(image, channel);
Mat Red, Green, Blue;
Blue = channel[0];
Green = channel[1];
Red = channel[2];
//Splitting method 2
// Separate the image in 3 places ( B, G and R )
// vector<Mat> bgr_planes;
// split(image, bgr_planes);
double maxB, maxG, maxR, Npx;
double min;
double coeffB, coeffG, coeffR;
Mat newB, newG, newR;
Mat mapB, mapG, mapR;
int P_Bi, P_Gi, P_Ri;
Mat rangeValues;
double intpart;
double TIME;
int histSize = 256;
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 };
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat countB, countG, countR;
//Start the timer for the method 1
TIME = (double)getTickCount();
// Compute the histograms
calcHist(&Blue, 1, 0, Mat(), countB, 1, &histSize, &histRange, uniform, accumulate);
calcHist(&Green, 1, 0, Mat(), countG, 1, &histSize, &histRange, uniform, accumulate);
calcHist(&Red, 1, 0, Mat(), countR, 1, &histSize, &histRange, uniform, accumulate);
// Get the max from each channel
minMaxLoc(Blue, &min, &maxB);
minMaxLoc(Green, &min, &maxG);
minMaxLoc(Red, &min, &maxR);
//Number of pixels
Npx = Blue.rows * Blue.cols;
// Compute the coefficient of the formula
coeffB = maxB / Npx;
coeffG = maxG / Npx;
coeffR = maxR / Npx;
//Initialize the new channels
newB = Mat(Blue.rows, Blue.cols, Blue.type(), cvScalar(0));
newG = Mat(Green.rows, Green.cols, Green.type(), cvScalar(0));
newR = Mat(Red.rows, Red.cols, Red.type(), cvScalar(0));
//For each value of the range
for (int value = 0; value < 255; value++)
{
    mapB = (Blue == value) / 255;
    mapG = (Green == value) / 255;
    mapR = (Red == value) / 255;
    // Number of pixels less than or equal to 'value'
    rangeValues = countB(Range(0, value + 1), Range(0, 1));
    P_Bi = cv::sum(rangeValues)[0];
    rangeValues = countG(Range(0, value + 1), Range(0, 1));
    P_Gi = cv::sum(rangeValues)[0];
    rangeValues = countR(Range(0, value + 1), Range(0, 1));
    P_Ri = cv::sum(rangeValues)[0];
    // Substitute the value in the new channel plane
    modf((coeffB * P_Bi), &intpart);
    newB = newB + mapB * intpart;
    modf((coeffG * P_Gi), &intpart);
    newG = newG + mapG * intpart;
    modf((coeffR * P_Ri), &intpart);
    newR = newR + mapR * intpart;
}
TIME = 1000 * ((double)getTickCount() - TIME) / getTickFrequency();
cout << "Method 1 - elapsed time: " << TIME << " milliseconds." << endl;
//Here it takes 2380 milliseconds
//....
//....
//....
//Start timer of method 2
TIME = 0;
TIME = (double)getTickCount();
//Get the max
minMaxLoc(Blue, &min, &maxB);
minMaxLoc(Green, &min, &maxG);
minMaxLoc(Red, &min, &maxR);
Npx = Blue.rows * Blue.cols;
coeffB = maxB / Npx;
coeffG = maxG / Npx;
coeffR = maxR / Npx;
newB = Mat(Blue.rows, Blue.cols, Blue.type(), cvScalar(0));
newG = Mat(Green.rows, Green.cols, Green.type(), cvScalar(0));
newR = Mat(Red.rows, Red.cols, Red.type(), cvScalar(0));
Mat mask; // allocated by the (Blue <= value) expressions below; no need for the old cvCreateImage API here
for (int value = 0; value < 255; value++)
{
    mapB = (Blue == value) / 255;
    mapG = (Green == value) / 255;
    mapR = (Red == value) / 255;
    // Here matrix operations are used instead of histograms
    mask = (Blue <= value) / 255;
    P_Bi = cv::sum(mask)[0];
    mask = (Green <= value) / 255;
    P_Gi = cv::sum(mask)[0];
    mask = (Red <= value) / 255;
    P_Ri = cv::sum(mask)[0];
    modf((coeffB * P_Bi), &intpart);
    newB = newB + mapB * intpart;
    modf((coeffG * P_Gi), &intpart);
    newG = newG + mapG * intpart;
    modf((coeffR * P_Ri), &intpart);
    newR = newR + mapR * intpart;
}
//End of the timer
TIME = 1000 * ((double)getTickCount() - TIME) / getTickFrequency();
cout << "Method 2 - elapsed time: " << TIME << " milliseconds." << endl;
//Here it takes 4651 milliseconds
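Both loops above are O(256·N) because the "pixels ≤ value" count is recomputed from scratch for every value, and each iteration also builds three full-size masks. Computing the histogram once, taking a running (prefix) sum, and folding the formula into a 256-entry lookup table reduces the whole thing to one histogram pass plus one per-pixel table lookup (which cv::LUT does in OpenCV). A plain C++ sketch of the idea on a single channel (the function name is mine):

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>

// Sketch of the equalization formula from the question, done the fast way:
// one histogram pass, a prefix sum, then a 256-entry lookup table applied
// per pixel (cv::LUT would do the last step in OpenCV).
std::vector<uint8_t> equalizeChannel(const std::vector<uint8_t>& px) {
    // 1. Histogram: one pass over the pixels.
    std::vector<int> hist(256, 0);
    for (uint8_t v : px) hist[v]++;

    int maxVal = *std::max_element(px.begin(), px.end());
    double coeff = static_cast<double>(maxVal) / px.size();

    // 2. Prefix sum gives "number of pixels <= value" for every value at once.
    std::vector<uint8_t> lut(256, 0);
    long long cumulative = 0;
    for (int value = 0; value < 256; value++) {
        cumulative += hist[value];
        // Truncate like modf's integer part.
        lut[value] = static_cast<uint8_t>(coeff * cumulative);
    }

    // 3. Apply the lookup table per pixel.
    std::vector<uint8_t> out(px.size());
    for (std::size_t i = 0; i < px.size(); i++) out[i] = lut[px[i]];
    return out;
}
```

Note that the original loops stop at value < 255, which skips the 255 bin and looks like an off-by-one; the sketch covers all 256 values.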

KissFFT output returns nan value?

Currently I'm working in the Tizen IDE.
I read the input data from the microphone and try to apply an FFT to it,
but every time I get NaN output from the FFT.
Here is my code:
ShortBuffer *pBuffer1 = pData->AsShortBufferN();
fft = new KissFFT(BUFFER_SIZE);
std::vector<short> input(pBuffer1->GetPointer(),
pBuffer1->GetPointer() + BUFFER_SIZE); // this contains audio data
std::vector<float> specturm(BUFFER_SIZE);
fft->spectrum(input, specturm);
Applying the FFT:
void KissFFT::spectrum(KissFFTO* fft, std::vector<short>& samples2,
                       std::vector<float>& spectrum) {
    int len = fft->numSamples / 2 + 1;
    kiss_fft_scalar* samples = (kiss_fft_scalar*) &samples2[0];
    kiss_fftr(fft->config, samples, fft->spectrum);
    for (int i = 0; i < len; i++) {
        float re = scale(fft->spectrum[i].r) * fft->numSamples;
        float im = scale(fft->spectrum[i].i) * fft->numSamples;
        if (i > 0)
            spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples / 2);
        else
            spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples);
        AppLog("specturm %d", spectrum[i]); // every time returns NaN output
    }
}
KissFFTO* KissFFT::create(int numSamples) {
    KissFFTO* fft = new KissFFTO();
    fft->config = kiss_fftr_alloc(numSamples / 2, 0, NULL, NULL);
    fft->spectrum = new kiss_fft_cpx[numSamples / 2 + 1];
    fft->numSamples = numSamples;
    return fft;
}
Scaling:
static inline float scale(kiss_fft_scalar val) {
    if (val < 0)
        return val * (1 / 32768.0f);
    else
        return val * (1 / 32767.0f);
}
AppLog("specturm %d", spectrum[i]); // every time returns NaN output
Try using %f rather than %d.
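The mismatch matters because %d tells the formatter to read an int from the variadic arguments, while a float argument is promoted to double; that is undefined behavior and can print garbage regardless of what the FFT actually produced. A minimal sketch of the corrected formatting (assuming AppLog is printf-style, as on Tizen):

```cpp
#include <cstdio>
#include <string>

// %d would read the double-promoted float as an int (undefined behavior);
// %f formats the value correctly.
std::string formatSpectrumValue(float v) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "spectrum %.3f", v);
    return std::string(buf);
}
```

So the log line becomes AppLog("spectrum %f", spectrum[i]);.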

How do I get most accurate audio frequency data possible from real time FFT on Tizen?

Currently I'm working in the Tizen IDE.
I read the input data from the microphone and apply an FFT to it, but every time I get NaN output.
Here is my code:
(The code, including the spectrum() and create() methods, is identical to the listing in the previous question above.)
In fft->config there should be a parameter for the size of the FFT, such as 2048 or 4096 (i.e., powers of 2). If you increase this value, you get more resolution in frequency.
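To make the resolution point concrete: a real FFT of size N at sample rate fs spaces its bins fs / N apart, so doubling N halves the bin width. A small sketch (helper names are mine; the chosen power-of-two size would then be passed to kiss_fftr_alloc):

```cpp
#include <cstddef>

// Round a requested size up to the next power of two, as FFT
// implementations are fastest (and KissFFT happiest) with such sizes.
std::size_t nextPowerOfTwo(std::size_t n) {
    std::size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Width of one frequency bin: sampleRate / fftSize.
double binResolutionHz(int sampleRate, std::size_t fftSize) {
    return static_cast<double>(sampleRate) / static_cast<double>(fftSize);
}
```

At 44.1 kHz, for example, a 4096-point FFT gives bins about 10.8 Hz wide, versus roughly 43 Hz at 1024 points, at the cost of more latency per block.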

For an Arduino sketch-based light meter, functions outside of loop() are not firing

I'm very new to Arduino; I have much more experience with Java and ActionScript 3. I'm working on building a light meter out of an Arduino Uno and a TAOS TSL235R light-to-frequency converter.
I could only find a tutorial using a different sensor, so I am working my way through converting what I need to get it all to work (i.e., some copy and paste, shamefully, but I'm new to this).
There are three parts; this is the first tutorial of the series: Arduino and the Taos TSL230R Light Sensor: Getting Started.
The photographic conversion: Arduino and the TSL230R: Photographic Conversions.
At first I could return values for the frequency produced by the TSL235R sensor, but once I added the code for the photographic conversions I only get zero back, and none of the functions outside of the main loop seem to fire, since my Serial.println() calls don't print anything.
I'm more concerned with making the functions fire than with whether my math is perfect. In ActionScript and Java there are event listeners for functions and such; do I need to declare a function somehow for it to fire in C/C++?
Basically, how can I make sure all my functions fire in the C programming language?
My Arduino sketch:
// TSL230R Pin Definitions
#define TSL_FREQ_PIN 2
// Our pulse counter for our interrupt
unsigned long pulse_cnt = 0;
// How often to calculate frequency
// 1000 ms = 1 second
#define READ_TM 1000
// Two variables used to track time
unsigned long cur_tm = millis();
unsigned long pre_tm = cur_tm;
// We'll need to access the amount of time passed
unsigned int tm_diff = 0;
unsigned long frequency;
unsigned long freq;
float lux;
float Bv;
float Sv;
// Set our frequency multiplier to a default of 1
// which maps to output frequency scaling of 100x.
int freq_mult = 100;
// We need to measure what to divide the frequency by:
// 1x sensitivity = 10,
// 10x sensitivity = 100,
// 100x sensitivity = 1000
int calc_sensitivity = 10;
void setup() {
attachInterrupt(0, add_pulse, RISING); // Attach interrupt to pin2.
pinMode(TSL_FREQ_PIN, INPUT); //Send output pin to Arduino
Serial.begin(9600); //Start the serial connection with the copmuter.
}//setup
void loop(){
// Check the value of the light sensor every READ_TM ms and
// calculate how much time has passed.
pre_tm = cur_tm;
cur_tm = millis();
if( cur_tm > pre_tm ) {
tm_diff += cur_tm - pre_tm;
}
else
if( cur_tm < pre_tm ) {
// Handle overflow and rollover (Arduino 011)
tm_diff += ( cur_tm + ( 34359737 - pre_tm ));
}
// If enough time has passed to do a new reading...
if (tm_diff >= READ_TM ) {
// Reset the ms counter
tm_diff = 0;
// Get our current frequency reading
frequency = get_tsl_freq();
// Calculate radiant energy
float uw_cm2 = calc_uwatt_cm2( frequency );
// Calculate illuminance
float lux = calc_lux_single( uw_cm2, 0.175 );
}
Serial.println(freq);
delay(1000);
} //Loop
unsigned long get_tsl_freq() {
// We have to scale out the frequency --
// Scaling on the TSL230R requires us to multiply by a factor
// to get actual frequency.
unsigned long freq = pulse_cnt * 100;
// Reset pulse counter
pulse_cnt = 0;
return(freq);
Serial.println("freq");
} //get_tsl_freq
void add_pulse() {
// Increase pulse count
pulse_cnt++;
return;
Serial.println("Pulse");
}//pulse
float calc_lux_single(float uw_cm2, float efficiency) {
// Calculate lux (lm/m^2), using standard formula
// Xv = Xl * V(l) * Km
// where Xl is W/m^2 (calculate actual received uW/cm^2, extrapolate from sensor size
// to whole cm size, then convert uW to W),
// V(l) = efficiency function (provided via argument) and
// Km = constant, lm/W # 555 nm = 683 (555 nm has efficiency function of nearly 1.0).
//
// Only a single wavelength is calculated - you'd better make sure that your
// source is of a single wavelength... Otherwise, you should be using
// calc_lux_gauss() for multiple wavelengths.
// Convert to w_m2
float w_m2 = (uw_cm2 / (float) 1000000) * (float) 100;
// Calculate lux
float lux = w_m2 * efficiency * (float) 683;
return(lux);
Serial.println("Get lux");
} //lux_single
float calc_uwatt_cm2(unsigned long freq) {
// Get uW observed - assume 640 nm wavelength.
// Note the divide-by factor of ten -
// maps to a sensitivity of 1x.
float uw_cm2 = (float) freq / (float) 10;
// Extrapolate into the entire cm2 area
uw_cm2 *= ( (float) 1 / (float) 0.0136 );
return(uw_cm2);
Serial.println("Get uw_cm2");
} //calc_uwatt
float calc_ev( float lux, int iso ) {
// Calculate EV using the APEX method:
//
// Ev = Av + Tv = Bv + Sv
//
// We'll use the right-hand side for this operation:
//
// Bv = log2( B/NK )
// Sv = log2( NSx )
float Sv = log( (float) 0.3 * (float) iso ) / log(2);
float Bv = log( lux / ( (float) 0.3 * (float) 14 ) ) / log(2);
return( Bv + Sv );
Serial.println("get Bv+Sv");
}
float calc_exp_tm ( float ev, float aperture ) {
// Ev = Av + Tv = Bv + Sv
// need to determine Tv value, so Ev - Av = Tv
// Av = log2(Aperture^2)
// Tv = log2( 1/T ) = log2(T) = 2^(Ev - Av)
float exp_tm = ev - ( log( pow(aperture, 2) ) / log(2) );
float exp_log = pow(2, exp_tm);
return( exp_log );
Serial.println("get exp_log");
}
unsigned int calc_exp_ms( float exp_tm ) {
unsigned int cur_exp_tm = 0;
// Calculate mS of exposure, given a divisor exposure time.
if (exp_tm >= 2 ) {
// Deal with times less than or equal to half a second
if (exp_tm >= (float) int(exp_tm) + (float) 0.5 ) {
// Round up
exp_tm = int(exp_tm) + 1;
}
else {
// Round down
exp_tm = int(exp_tm);
}
cur_exp_tm = 1000 / exp_tm;
}
else if( exp_tm >= 1 ) {
// Deal with times larger than 1/2 second
float disp_v = 1 / exp_tm;
// Get first significant digit
disp_v = int( disp_v * 10 );
cur_exp_tm = ( 1000 * disp_v ) / 10;
}
else {
// Times larger than 1 second
int disp_v = int( (float) 1 / exp_tm);
cur_exp_tm = 1000 * disp_v;
}
return(cur_exp_tm);
Serial.println("get cur_exp_tm");
}
float calc_exp_aperture( float ev, float exp_tm ) {
float exp_apt = ev - ( log( (float) 1 / exp_tm ) / log(2) );
float apt_log = pow(2, exp_apt);
return( apt_log );
Serial.println("get apt_log");
}
That is a lot of code to read; where should I start?
In your loop() you are assigning frequency but printing freq:
// get our current frequency reading
frequency = get_tsl_freq();
-- snip --
Serial.println(freq);
In get_tsl_freq() you are creating a local unsigned long freq that hides the global freq, and using that for the calculation and the return value; maybe that is also a source of confusion for you. I see no reason for frequency and freq to be globals in this code. The function also contains unreachable code: control leaves the function at the return statement, so statements after the return will never execute.
unsigned long get_tsl_freq() {
unsigned long freq = pulse_cnt * 100; <-- hides global variable freq
// re-set pulse counter
pulse_cnt = 0;
return(freq); <-- ( ) not needed
Serial.println("freq"); <-- Unreachable
}
Reading a bit more, I suggest you pick up a C++ book and read a bit. While your code compiles, it is not technically valid C++; you get away with it thanks to the Arduino software, which does some preprocessing to allow functions to be used before they are declared.
On the constants you use in your calculations,
float w_m2 = (uw_cm2 / (float) 1000000) * (float) 100;
could be written as
float w_m2 = (uw_cm2 / 1000000.0f) * 100.0f;
or even like this because uw_cm2 is a float
float w_m2 = (uw_cm2 / 1000000) * 100;
You also seem to take both approaches to waiting: you have code that calculates and only runs if it has been more than 1000 ms since it last ran, but then you also delay(1000) in the same loop. This may not work as expected at all.
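The usual fix for that last point is the non-blocking "blink without delay" pattern: compare millis() against the last trigger time and drop the delay() entirely. A sketch (the struct name is mine, and the clock is passed in as a parameter so the logic can be exercised off the board):

```cpp
#include <cstdint>

// Non-blocking interval check, clock-agnostic: on the board you would pass
// millis(), in a test you can pass a fake clock value.
struct IntervalTimer {
    uint32_t intervalMs;
    uint32_t lastMs = 0;

    explicit IntervalTimer(uint32_t intervalMs) : intervalMs(intervalMs) {}

    // Returns true once per elapsed interval. Unsigned subtraction makes the
    // millis() rollover at 2^32 harmless, with no hand-rolled overflow constant.
    bool ready(uint32_t nowMs) {
        if (nowMs - lastMs >= intervalMs) {
            lastMs = nowMs;
            return true;
        }
        return false;
    }
};
```

In loop() this would be used as if (timer.ready(millis())) { frequency = get_tsl_freq(); ... } with no delay(1000), so the loop keeps spinning and the interrupt-driven pulse counting is never starved.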