How can I smooth the rate of change of numeric data? - c++

I'm a college student and I'm new here. I've been having a problem with my assignment, in which I have to calculate the speed of a car. I've got the algorithm right (using a 1/60 s timer) and the formula, but I have a problem with displaying the speed number. The output value changes very rapidly within one second (yes, it updates very frequently since I use a 1/60 s timer). Is there any way to smooth the rate of change of that output?
I've tried rounding the number, but the rate of change is still very quick.
// For example, the Car1 object is moving along the x axis
// My method to calculate the speed with a 1/60 s timer
// every 1/60 s timeout:
if (distanceToggler == true) {
    vDistance[0] = car->getCarPos().x();
}
else {
    vDistance[1] = car->getCarPos().x();
}
// if true, assign to vDistance[0]; else assign to vDistance[1]
distanceToggler = !distanceToggler;

if ((vDistance[1] - vDistance[0]) >= 0) {
    defaultSetting.editCurrentCarSpeed((vDistance[1] - vDistance[0]) / (0.6f));
}
currentCarSpeed = (vDistance[0] - vDistance[1]) / (0.6f);

A simple way to smooth noisy values arriving frequently is to keep a kind of running average and only adjust it by a percentage of each new value:
#include <iostream>

const float smooth_factor = 0.05f;

// Assume the first sample is correct (alternatively you could initialize to 0)
float smooth_v;
std::cin >> smooth_v;

// Read samples and output filtered samples
for (float v; std::cin >> v; )
{
    smooth_v = (1.0f - smooth_factor) * smooth_v + smooth_factor * v;
    std::cout << smooth_v << std::endl;
}
The smaller you make smooth_factor, the slower the "smooth" value will change in response to new data. You can tweak this value to something suitable for your application.
This is a fast alternative to taking an unweighted average over a window of recent samples (although such averages can also be computed in constant time). It is slightly different in that every historical value has some effect, which decays over time.
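Applied to your speed display, the same idea is one extra line in the timer callback. Here is a minimal sketch under that assumption; displayedSpeed and onSpeedTick are illustrative names for this example, not variables from your code:
const float smooth_factor = 0.05f;  // smaller = smoother, but slower to react
float displayedSpeed = 0.0f;        // hypothetical: the value actually shown on screen

// Called every 1/60 s, after the raw currentCarSpeed has been updated.
void onSpeedTick(float currentCarSpeed)
{
    // Blend a small fraction of the new reading into the displayed value.
    displayedSpeed = (1.0f - smooth_factor) * displayedSpeed
                   + smooth_factor * currentCarSpeed;
}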

Related

Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale and although it's a better result than bilinear or bicubic interpolation, it also takes quite a hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100
// Variables
float fracx, fracy;
int Xnew, Ynew, p, q, Calc;
int x, y, z, p1, q1, i, j;
// New image dimensions
Xnew = image->width * ctl(0) / 100;
Ynew = image->height * ctl(0) / 100;
for (y = 0; y < image->height; y++){ // rows
    for (x = 0; x < image->width; x++){ // columns
        p1 = (int)x * image->width / Xnew;
        q1 = (int)y * image->height / Ynew;
        for (z = 0; z < 3; z++){ // channels
            Calc = 0; // reset the accumulator for each channel
            for (i = -3; i <= 3; i++) {
                for (j = -3; j <= 3; j++) {
                    Calc += (int)(src(p1 - i, q1 - j, z));
                } // j
            } // i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexes at every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done? Use pointers to the data and manipulate those.
Second: your algorithm is wrong. You don't want an average filter; instead, you want to lay a grid over your source image, compute the average of every grid cell, and put it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t *pin = /*pointer to input data*/;
uint8_t *pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew * 4);
    memset(pcount = count.data(), 0, Xnew * 4);
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin; // accumulate the source pixel into the current cell
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I later changed the variable names to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32-bit ints which can hold the partial sums of all the source pixels that fall into one row of the output image. Then you divide by the cell count and save the result to the output image.
The first thing you should do is to set up a performance evaluation system to measure how much any change impacts on the performance.
As said previously, you should use pointers rather than indexes for a (probably substantial) speed up, and you should not simply average, since a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels": a matrix representing the weight given to each pixel. That way, you will be able to test different strategies and optimize quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code it seems you apply a 7x7 box kernel, i.e. 49 equal weights of 1/49. The 3x3 equivalent of that kernel would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
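As a rough illustration of what "using a kernel" means in code, here is a minimal sketch of convolving one channel at one source position with an arbitrary odd-sized kernel. It reuses the src() accessor from your snippet; applyKernel and box3 are made-up names for this example:
// Convolve one channel at source position (px, py) with an n x n kernel
// whose weights sum to 1 (n assumed odd).
float applyKernel(int px, int py, int z, const float* kernel, int n)
{
    int half = n / 2;
    float sum = 0.0f;
    for (int i = -half; i <= half; ++i) {
        for (int j = -half; j <= half; ++j) {
            sum += kernel[(i + half) * n + (j + half)] * src(px + i, py + j, z);
        }
    }
    return sum;
}

// A 3x3 box kernel; your 7x7 average is the same idea with 49 weights of 1/49.
const float box3[9] = {
    1.0f/9, 1.0f/9, 1.0f/9,
    1.0f/9, 1.0f/9, 1.0f/9,
    1.0f/9, 1.0f/9, 1.0f/9
};
Switching to a different kernel (for example a sharper downsampling kernel like the one linked above) then only means changing the weight table, not the loop.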

Drawing waveform - converting to dB squashes it

I have a wave file and a function that retrieves 2 samples per pixel, then I draw lines with them. Quick and painless before I deal with zooming. I can display the amplitude values no problem:
That is an accurate image of the waveform. To do this I used the following code:
// tempAllChannels[numOfSamples] holds amplitude data for the entire wav
// oneChannel[numOfPixels*2] will hold 2 values per pixel in display area: an average of min amp, and an average of max
for (int i = 0; i < numOfSamples; i++) // loop through all samples in wave file
{
    if (tempAllChannels[i] < 0)  min += tempAllChannels[i]; // if neg amp value, add amp value to min
    if (tempAllChannels[i] >= 0) max += tempAllChannels[i];
    if (i % factor == 0 && i != 0) // factor is (num of samples in wav)/(num of pixels) in display area
    {
        min = min / factor; // get average amp value
        max = max / factor;
        oneChannel[j] = max;
        oneChannel[j + 1] = min;
        j += 2;  // iterate for next time
        min = 0; // reset for next time
        max = 0;
    }
}
That's great, but I need to display in dB so that quieter wave images aren't ridiculously small. However, when I make the following change to the above code:
oneChannel[j]=10*log10(max);
oneChannel[j+1]=-10*log10(-min);
the wave image looks like this.
That isn't accurate; it looks like it's being squashed. Is there something wrong with what I'm doing? I need to find a way to convert from amplitude to decibels whilst maintaining dynamics. I'm thinking I shouldn't be taking an average when converting to dB.
Don't convert to dB for overviews. No one does that.
Instead of finding the average over a block, you should find the max of the absolute value. By averaging, you will lose a lot of amplitude in your high-frequency peaks.
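As a minimal sketch of that change, here is the block loop from the question rewritten to track per-block peaks (the most negative and most positive samples) instead of sums; the variable names are reused from the question purely for illustration:
// For each block of 'factor' samples, keep the extreme values instead of averaging.
for (int i = 0; i < numOfSamples; i++)
{
    if (tempAllChannels[i] < min) min = tempAllChannels[i]; // negative peak
    if (tempAllChannels[i] > max) max = tempAllChannels[i]; // positive peak
    if (i % factor == 0 && i != 0)
    {
        oneChannel[j] = max;
        oneChannel[j + 1] = min;
        j += 2;
        min = 0; // reset for the next block
        max = 0;
    }
}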

Linear size increase ends up increasing by 1 more sometimes

I have a flow layout. Inside it I have about 900 tables. Each table is stacked one on top of the other. I have a slider which resizes them and thus causes the flow layout to resize too.
The problem is, the tables should be resizing linearly. Their base size is 200x200, so when scale = 1.0, the width and height of the tables are 200.
Here is an example of the problem:
My issue is when delta is 8 instead of 9. What could I do to make sure my increases are always linear?
void LobbyTableManager::changeTableScale( double scale )
{
    setTableScale(scale);
}

void LobbyTableManager::setTableScale( double scale )
{
    scale += 0.3;
    scale *= 2.0;
    float scrollRel = m_vScroll->getRelativeValue();
    setScale(scale);
    rescaleTables();
    resizeFlow();
    ...
}

double LobbyTableManager::getTableScale() const
{
    return (getInnerWidth() / 700.0) * getScale();
}

void LobbyFilterManager::valueChanged( agui::Slider* source, int val )
{
    if (source == m_magnifySlider)
    {
        DISPATCH_LOBBY_EVENT
        {
            (*it)->changeTableScale((double)val / source->getRange());
        }
    }
}
In short, I would like to ensure that the tables always increase by a linear amount. I can't understand why, every few steps, delta is 8 rather than 9.
Thanks
Look at your "200 X Table Scale" values: they are going up by about 8.8. So when they are rounded to an integer, the result will be 9 more than the previous value about 80% of the time and 8 more the other 20% of the time.
If you really need the increases to be the same size every time, you have to do everything with integers. Otherwise, you have to adjust your scale changes so the result is closer to 9.0.
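Here is a quick illustration of why this happens, using a made-up step of ~8.8 px per slider notch (the step value comes from the observation above, not from your code):
#include <cmath>
#include <cstdio>

int main()
{
    // The ideal (floating point) size grows by the same amount every step,
    // but the on-screen size must be an integer, so consecutive deltas
    // alternate between 9 and 8 depending on where the rounding falls.
    double idealSize = 200.0;
    int prev = (int)std::lround(idealSize);
    for (int step = 1; step <= 10; ++step) {
        idealSize += 8.8;
        int cur = (int)std::lround(idealSize);
        std::printf("step %2d: size %3d, delta %d\n", step, cur, cur - prev);
        prev = cur;
    }
    return 0;
}
Computing each size from the base (200 * scale) and rounding, rather than adding a rounded delta each time, keeps the error from accumulating, but the deltas still won't all be equal unless the step itself is an integer.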

C++: Calculating Moving FPS

I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one second increments.
This is a near miss of a very recent posting; see my response there on using exponentially weighted moving averages:
C++: Counting total frames in a game
Here's sample code.
Initially:
avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.
Every second (assuming the total number of frames in the last second is in framesThisSecond):
// Choose alpha depending on how fast or slow you want old averages to decay.
// 0.9 is usually a good choice.
avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds
int frameStartTime; // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;

void onFrameStart()
{
    frameStartTime = getCurrentTime();
}

void onFrameEnd()
{
    int frameLength = getCurrentTime() - frameStartTime;
    frameLengths.enqueue(frameLength);
    queueAggregate += frameLength;
    while (queueAggregate > trackedTime)
    {
        int oldFrame = frameLengths.dequeue();
        queueAggregate -= oldFrame;
    }
    setAverageFps(frameLengths.count() / 3); // frames in the queue span roughly 3 seconds
}
You could keep a circular buffer of the frame times for the last 100 frames and average them. That'll be "FPS for the last 100 frames". (Or, rather, 99 intervals, since 100 timestamps only give you 99 frame-to-frame differences.)
Call some accurate system time, milliseconds or better.
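A minimal sketch of that circular-buffer idea follows; nowMs stands in for whatever millisecond-or-better clock is available, and recordFrame is an illustrative name:
#include <array>

constexpr int kFrames = 100;
std::array<double, kFrames> frameTimes{}; // timestamps in milliseconds
int frameIndex = 0;
int framesSeen = 0;

// Call once per frame with the current time in milliseconds;
// returns the average FPS over the last kFrames frames (0 until enough frames are recorded).
double recordFrame(double nowMs)
{
    frameTimes[frameIndex] = nowMs;
    frameIndex = (frameIndex + 1) % kFrames;
    if (framesSeen < kFrames) { ++framesSeen; return 0.0; }
    double oldest = frameTimes[frameIndex]; // oldest timestamp still in the buffer
    // kFrames stored timestamps span kFrames - 1 frame intervals.
    return (kFrames - 1) * 1000.0 / (nowMs - oldest);
}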
What you actually want is something like this (in your mainLoop):
frames++;
if (time < secondsTimer()) {
    time = secondsTimer();
    printf("Average FPS from the last 2 seconds: %d", (frames + lastFrames) / 2);
    lastFrames = frames;
    frames = 0;
}
If you know how to deal with structures/arrays, it should be easy to extend this example to e.g. 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, language) - otherwise everything is just guessing...

Help with code optimization

I've written a little particle system for my 2D application. Here is the rain code:
// HPP -----------------------------------
struct Data
{
    float x, y, x_speed, y_speed;
    int timeout;
    Data();
};
std::vector<Data> mData;
bool mFirstTime;
void processDrops(float windPower, int i);

// CPP -----------------------------------
Data::Data()
    : x(rand() % ScreenResolutionX), y(0)
    , x_speed(0), y_speed(0), timeout(rand() % 130)
{ }

void Rain::processDrops(float windPower, int i)
{
    int posX = rand() % mWindowWidth;
    mData[i].x = posX;
    mData[i].x_speed = WindPower * 0.1; // WindPower is float
    mData[i].y_speed = Gravity * 0.1;   // Gravity is 9.8 * 19.2
    // If this is the first time, position the drops randomly within the window height
    if (mFirstTime)
    {
        mData[i].timeout = 0;
        mData[i].y = rand() % mWindowHeight;
    }
    else
    {
        mData[i].timeout = rand() % 130;
        mData[i].y = 0;
    }
}

void update(float windPower, float elapsed)
{
    // If this is the first time - create the array with new Data structure objects
    if (mFirstTime)
    {
        for (int i = 0; i < mMaxObjects; ++i)
        {
            mData.push_back(Data());
            processDrops(windPower, i);
        }
        mFirstTime = false;
    }
    for (int i = 0; i < mMaxObjects; i++)
    {
        // Sleep until timeout reaches 0 (to make drops fall with a random delay)
        if (mData[i].timeout > 0)
        {
            mData[i].timeout--;
        }
        else
        {
            // Find new x/y positions
            mData[i].x += mData[i].x_speed * elapsed;
            mData[i].y += mData[i].y_speed * elapsed;
            // Find new speeds
            mData[i].x_speed += windPower * elapsed;
            mData[i].y_speed += Gravity * elapsed;
            // Drawing here ...
            // If the drop has fallen out of the screen
            if (mData[i].y > mWindowHeight) processDrops(windPower, i);
        }
    }
}
So the main idea is: I have a structure that holds a drop's position and speed, and a function for processing the drop at a given index in the vector. On the first run I fill the array to its maximum size and then process it in a loop.
But this code runs slower than everything else I have. Please help me optimize it.
I tried replacing all int with uint16_t, but I don't think it makes a difference.
Replacing int with uint16_t shouldn't make any difference (it'll take less memory, but shouldn't affect running time on most machines).
The code shown already seems pretty fast (it does only what it needs to do, and there are no particular mistakes); I don't see how you could optimize it much further (at most you could remove the check on mFirstTime, but that should make no difference).
If it's slow, it's because of something else. Maybe you've got too many drops, or the rest of your code is so slow that update gets called only a few times per second.
I'd suggest you to profile your program and see where most time is spent.
EDIT:
One thing that could speed up such an algorithm, especially if your system has no FPU (which is not the case on a personal computer...), would be to replace your floating point values with integers.
Just multiply the elapsed variable (and your constants, like those 0.1) by 1000 so that they represent milliseconds, and use only integers everywhere.
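A rough sketch of what that fixed-point version could look like for one drop follows. The field names mirror the ones in the question; the 1000x scaling and the helper name stepDrop are assumptions for this example, not part of the original code:
#include <cstdint>

// Positions and speeds stored as integers in 1/1000ths of a pixel,
// elapsed time as integer milliseconds: no floating point in the inner loop.
struct DataFixed
{
    int32_t x, y;              // position * 1000
    int32_t x_speed, y_speed;  // speed * 1000 (per second)
    int timeout;
};

void stepDrop(DataFixed& d, int32_t windPower1000, int32_t gravity1000, int elapsedMs)
{
    d.x += d.x_speed * elapsedMs / 1000;
    d.y += d.y_speed * elapsedMs / 1000;
    d.x_speed += windPower1000 * elapsedMs / 1000;
    d.y_speed += gravity1000 * elapsedMs / 1000;
}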
A few points:
The physics is incorrect: the wind force should decrease as the drop's speed gets close to the wind speed; for simplicity I would assume that the initial value of x_speed is the speed of the wind.
You don't take friction with the air into account at all, so the drops just get faster and faster - but that depends on what you want to model.
I would simply assume that a drop falls at constant speed in a constant direction, because that is really what happens very quickly.
Also, you can optimize all of this very simply, since you don't need to solve the equation of motion by integration; it can be solved directly:
x(t):= x_0 + wind_speed * t
y(t):= y_0 - fall_speed * t
This is the case of stable (terminal) fall, when the force of gravity is equal to the friction.
x(t):= x_0 + wind_speed * t;
y(t):= y_0 - 0.5 * g * t^2;
Use these if you want to model drops that fall faster and faster.
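A minimal sketch of that closed-form approach for one drop, under the constant-speed assumption above (spawnTime, windSpeed and fallSpeed are illustrative names, not from the original code):
// With constant speeds, the position at any time follows directly from the
// spawn position and the time since the drop was (re)spawned;
// no per-frame integration of the speed is needed.
struct Drop
{
    float x0, y0;    // spawn position
    float spawnTime; // time (in seconds) when the drop was spawned
};

void positionAt(const Drop& d, float now, float windSpeed, float fallSpeed,
                float& outX, float& outY)
{
    float t = now - d.spawnTime;
    outX = d.x0 + windSpeed * t;
    outY = d.y0 + fallSpeed * t; // y grows downward in screen coordinates
}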
A few things to consider:
In your processDrops function, you pass in windPower but use some sort of class member or global called WindPower - is that a typo? If the value of Gravity does not change, then save the calculation (i.e. the multiplication by 0.1) and use that directly.
In your update function, rather than calculating windPower * elapsed and Gravity * elapsed on every iteration, calculate and save them before the loop, then just add. Also, re-organise the loop: there is no need to do the speed calculation and render if the drop is off the screen; do the check first, and only if the drop is still on the screen update the speed and render (see the sketch after this list).
Interestingly, you never check whether the drop is off the screen in terms of its x co-ordinate - you check the height, but not the width. You could save yourself some calculations and rendering time if you did this check as well!
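A rough sketch of those suggestions combined (hoisting the per-frame deltas out of the loop, doing the off-screen check first, and adding the missing x-bounds check), based on the loop from the question:
// Hoist the per-frame speed increments out of the loop.
const float windDelta    = windPower * elapsed;
const float gravityDelta = Gravity   * elapsed;

for (int i = 0; i < mMaxObjects; i++)
{
    Data& d = mData[i];
    if (d.timeout > 0) { d.timeout--; continue; }

    // Respawn first if the drop has already left the screen (in x or y),
    // so we don't waste a speed update and a draw call on it.
    if (d.y > mWindowHeight || d.x < 0 || d.x > mWindowWidth)
    {
        processDrops(windPower, i);
        continue;
    }

    d.x += d.x_speed * elapsed;
    d.y += d.y_speed * elapsed;
    d.x_speed += windDelta;
    d.y_speed += gravityDelta;
    // Drawing here ...
}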
In the loop, introduce a reference Data& current = mData[i] and use it instead of mData[i]. And pass this reference instead of the index to processDrops as well.
BTW, I thought that consulting mFirstTime in processDrops served no purpose because it would never be true - but I missed the processDrops call in the initialization loop. Never mind this.
This looks pretty fast to me already.
You could get some tiny speedup by removing the "first time" code and putting it in its own function to call once, rather than testing it on every call.
You are doing the same calculation on lots of similar data, so maybe you could look into using SSE intrinsics to process several items at once. You'll likely have to rearrange your data structure for that, though, into a structure of vectors rather than a vector of structures like now (see the sketch below). I doubt it would help too much though. How many items are in your vector anyway?
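For illustration, here is roughly what that structure-of-vectors layout could look like (DropsSoA is a made-up name, and the SSE intrinsics themselves are left out):
#include <vector>

// Structure of vectors: each field lives in its own contiguous array,
// so x[i], x[i+1], ... can be loaded into one SIMD register at a time.
struct DropsSoA
{
    std::vector<float> x, y;
    std::vector<float> x_speed, y_speed;
    std::vector<int>   timeout;
};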
It looks like maybe all your time goes into the "Drawing here ..." part.
It's easy enough to find out for sure where the time is going.