Using CProgressCtrl::SetRange32 to increase a range - mfc

I have a situation where I start off with an initial range and then want to extend the range.
But if I call SetRange32 with the increased size the progress bar resets to 0 and then I have to set the position again.
I don't want it to reset to 0. If anything, I want it to dynamically re-adjust based on the new range and retain the existing position.
Is this possible?
Calling SetRange and then SetPos to get back on track is a visually ugly solution.

I'd set a very large fixed range once with CProgressCtrl::SetRange32 and then use CProgressCtrl::SetPos, handling a virtual size and a virtual position yourself.
This is the idea:
You want:
SetRange32(0, 100)
SetPos(50)          // position 50% (absolute position 50)
SetRange32(0, 200)  // position should drop to 25% (absolute position still 50)
// (I suppose that's what you want)
SetPos(60)          // position 30% (absolute position 60)
This works, but it is visually ugly.
Do this instead:
SetRange32(0, BIGRANGE);         // set once, never change it again
SetPos(BIGRANGE * 50 / 100)      // position 50% (virtual position 50 of range 100)
// now we want another range NEWRANGE - no SetRange32 needed
SetPos(BIGRANGE * 50 / NEWRANGE) // virtual position 50 of range NEWRANGE
SetPos(BIGRANGE * 60 / NEWRANGE) // virtual position 60 of range NEWRANGE
Of course, you need to take care with integer division (multiply before dividing, as above) or use floating point.
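Wrapped up, the idea looks something like this (a minimal MFC sketch; BIGRANGE, the wrapper class, and its method names are illustrative, not part of MFC):

// Minimal sketch: the control's real range is fixed at 0..BIGRANGE once;
// a virtual range/position pair is mapped onto it, so growing the range
// never calls SetRange32 again and the bar never resets.
const int BIGRANGE = 30000;

class CVirtualProgress
{
public:
    explicit CVirtualProgress(CProgressCtrl& ctrl) : m_ctrl(ctrl)
    {
        m_ctrl.SetRange32(0, BIGRANGE); // set the real range once, never again
    }

    void SetVirtualRange(int range) { m_range = range; Update(); }
    void SetVirtualPos(int pos)     { m_pos = pos; Update(); }

private:
    void Update()
    {
        if (m_range > 0) // 64-bit intermediate avoids overflowing pos * BIGRANGE
            m_ctrl.SetPos(static_cast<int>(
                static_cast<long long>(m_pos) * BIGRANGE / m_range));
    }

    CProgressCtrl& m_ctrl;
    int m_range = 100;
    int m_pos   = 0;
};

With this, SetVirtualRange(100) followed by SetVirtualPos(50) shows 50%, and a later SetVirtualRange(200) smoothly drops the bar to 25% without ever touching SetRange32.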
Update (from #ajtruckle)
Or, just leave the progress bar with the default range of 0 - 100 and work out the percentages accordingly. No need to change the range at all.


How to set the frequency band of my array after fft

How can I set the frequency band for my array from KissFFT? The sampling frequency is 44100 Hz and I need to map it onto my array realPartFFT. I have no idea how it works. I need to plot my spectrum chart to see if it computes correctly. When I plot it now, it still has only 513 numbers on the x axis, without the specified frequencies.
#include <cmath>
#include "kiss_fftr.h"

int windowCount = 1024;
float floatArray[windowCount], realPartFFT[(windowCount / 2) + 1];
kiss_fftr_cfg cfg = kiss_fftr_alloc(windowCount, 0, NULL, NULL);
kiss_fft_cpx cpx[(windowCount / 2) + 1];
kiss_fftr(cfg, floatArray, cpx);
for (int i = 0; i < (windowCount / 2) + 1; ++i)
    realPartFFT[i] = sqrtf(powf(cpx[i].r, 2.0f) + powf(cpx[i].i, 2.0f));
First of all: KissFFT doesn't know anything about the source of the data. You pass it an array of real numbers of a given size N, and you get in return an array of complex values of size N/2+1. The input array may be the weather forecast for the next N hours or the number of sunspots for the past N days. KissFFT doesn't care.
The mapping back to the real world needs to be done by you, so you have to interpret the data. Judging from your code snippet, you are passing in 1024 floats (I assume floatArray contains the input data). You then get back an array of 513 (= 1024/2 + 1) complex values, each a pair of floats.
If you are sampling at 44.1 kHz and pass KissFFT chunks of 1024 samples (your window size), the highest frequency you get is 22.05 kHz and the lowest is about 43 Hz (44,100 / 1024). You can get even lower by passing bigger chunks to KissFFT, but keep in mind that processing time will grow (as N log N for an FFT of size N).
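To put real frequencies on the x axis, map each bin index to i * sampleRate / windowCount. A minimal sketch (variable names follow the question):

// Fill freqAxis[i] with the centre frequency of real-FFT bin i:
// i * sampleRate / windowCount Hz, for i = 0 .. windowCount/2.
void makeFrequencyAxis(float* freqAxis, int windowCount, float sampleRate)
{
    for (int i = 0; i <= windowCount / 2; ++i)
        freqAxis[i] = i * sampleRate / windowCount; // 0 Hz ... Nyquist (22050 Hz)
}

Plot realPartFFT[i] against freqAxis[i] and the x axis runs from 0 Hz to 22,050 Hz instead of 0 to 512.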
Btw: you may consider making your windowCount variable const, to allow the compiler to do some optimizations (it also makes the array sizes proper compile-time constants). Optimizations are very valuable when doing number crunching. In this case the effect may be negligible, but it's a good starting point.

Calculating the mean of the data

/***************************************************************************
  Description : Calculates the trimmed mean of the data.
  Comments    : trim defaults to 0. trim = 0.5 gives the median.
***************************************************************************/
Real StatData::mean(Real trim) const
{
    check_trim(trim);
    if (size() < 1)
        err << "StatData::mean: no data" << fatal_error;
    Real result = 0;
    const_cast<StatData&>(*this).items.sort();
    int low = (int)(size() * trim); // starting at 0
    int high = size() - low;
    if (low == high) {
        low--; high++;
    }
    for (int k = low; k < high; k++)
        result += items[k];
    ASSERT(2*low < size()); // Make sure we're not dividing by zero.
    return result / (size() - 2*low);
}
I have three questions to ask:
1) Is *this referring to StatData?
2) Why is ASSERT(2*low < size()) checking for not dividing by zero?
3) The mean usually means the total sum divided by the total size, but why are we dividing by size() - 2*low?
Before we start, let's take a little bit of time to explain what the parameter trim is.
trim denotes the fraction of data you want to cut off from each end of the (sorted) data before computing the statistic you need. With trim = 0.5 you cut everything off except the middle, which gives the median. With trim = 0.1, for example, the first 10% and the last 10% of the data are discarded, and you compute the mean over the remaining 80%. Note that trim is a normalized fraction in [0, 1].

This fraction is multiplied by size() to determine the index in the sorted data at which to start computing the mean, denoted by low, and the index at which to stop, denoted by high. high is simply size() - low, as the amount of data cut off from each side must be symmetric. For example, with 10 sorted values and trim = 0.1, low = 1 and high = 9, so the mean is taken over the 8 middle values.

This statistic is sometimes called the alpha-trimmed mean, or more commonly the truncated mean, where alpha is the fraction cut from each end of the sorted data; in our case, alpha = trim.
Now onto your questions.
Question #1
The *this refers to the current instance of the class, which is of type StatData; the expression is ultimately trying to access items, which appears to be a container holding values of type Real. However, as Neil Kirk explained in his comment, and as Hi I'm Dan also noted, using const_cast like this just so you can sort items from inside a const member function is very unsafe. This is very bad.
Question #2
This is basically to ensure that when you calculate the mean, you aren't dividing by zero. The assertion checks that 2*low < size(), which guarantees that the denominator size() - 2*low is greater than 0, as the arithmetic mean requires. Should this condition fail, computing the mean is not possible, and the program should stop with an error.
Question #3
You are dividing by size() - 2*low because trim discards a proportion of the data from the beginning and the end of your data: exactly low elements on each side. Take note that high marks where we stop accumulating at the upper end, and the number of elements above that point is also low. In total 2*low elements are eliminated, which is why you subtract that from size(): that data no longer contributes to the mean.
The function is marked const, so the developer used a rather ugly const_cast to cast the const away in order to call sort.
ASSERT appears to be a macro (due to it being in capital letters) that most likely calls assert, which terminates the program if the expression evaluates to zero.
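For illustration, here is a sketch of how the same function could avoid the const_cast by sorting a copy instead (this assumes items can be copied into a std::vector<Real>; everything else follows the original):

#include <algorithm>
#include <cassert>
#include <vector>

Real StatData::mean(Real trim) const
{
    check_trim(trim);
    if (size() < 1)
        err << "StatData::mean: no data" << fatal_error;

    // Sort a local copy instead of casting away const on *this.
    std::vector<Real> sorted(items.begin(), items.end());
    std::sort(sorted.begin(), sorted.end());

    int low  = static_cast<int>(size() * trim);
    int high = size() - low;
    if (low == high) {        // trim = 0.5: keep the middle element
        low--; high++;
    }

    Real result = 0;
    for (int k = low; k < high; k++)
        result += sorted[k];

    assert(2 * low < size()); // denominator must stay positive
    return result / (size() - 2 * low);
}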
For a summary of what trimmed mean means, refer to this page.
The 10% trimmed mean is the mean computed by excluding the 10% largest
and 10% smallest values from the sample and taking the arithmetic mean
of the remaining 80% of the sample ...

Bin Packing algorithm - Practical Variation

I am trying to solve a weird bin packing problem. A link for the original problem is here
(sorry for the long question, thanks for your patience)
I am re-iterating the problem as follows:
I am trying to write an application that generates drawings for a compartmentalized panel.
I have N cubicles (2D rectangles) (N <= 40). For each cubicle there is a minimum height (minHeight[i]) and minimum width (minWidth[i]) associated. The panel itself also has a MAXIMUM_HEIGHT constraint.
These N cubicles have to be stacked one on top of the other in a column-wise grid such that the above constraints are met for each cubicle.
Also, the width of each column is decided by the maximum of minWidths of each cubicle in that column.
Also, the height of each column should be the same; this decides the height of the panel.
We can add spare cubicles in the empty space left in any column or we can increase the height/width of any cubicle beyond the specified minimum. However we cannot rotate any of the cubicles.
OBJECTIVE: TO MINIMIZE TOTAL PANEL WIDTH.
MAXIMUM_HEIGHT of panel = 2100mm, minwidth range (350mm to 800mm), minheight range (225mm to 2100mm)
As per the answer chosen, I formulated the Integer Linear Program. However, given the combinatorial nature of the problem, the solver appears to 'hang' on N > 20.
I am now trying to implement a work-around solution.
The cubicles are sorted in descending order of minWidths. If the minWidths are equal, then they are sorted in descending order of their minHeights.
I then solve it using the First Fit decreasing heuristic. This gives me an upper bound on the total panel width, and a list of present column widths.
Now I try to make the panel width smaller and try to fit my feeders in that smaller sized panel. (I am able to check whether the feeders fit in a given list of column widths in an efficient manner)
The panel width can be made smaller in the following ways:
1. Take any column, replace it with a column of next lower minWidth feeder. If the column is already of the lowest minWidth, then try to remove it and check.
2. Take any column, replace it with a column of a higher minWidth feeder and remove another column.
3. Any other way that I don't know of; I shall be glad if anyone can point it out.
I have implemented the 1st way correctly. Following is the code. However, I am not able to put the 2nd way into code correctly.
for ( int i = 0; i < columnVector.size(); i++ ) {
    QVector< Notepad::MyColumns > newVec( columnVector );
    if ( newVec[i].quantity > 0
         && ( i > 0 || newVec[i].quantity > 1 ) ) {
        newVec[i].quantity--;
        if ( i < columnVector.size() - 1 )
            newVec[i+1].quantity++;
        float fitResult = tryToFit( newVec, feederVector );
        myPanelWidth = fitResult ? fitResult : myPanelWidth;
        if ( fitResult ) { // if feeders fit, then start the iteration again
            columnVector = newVec;
            i = -1;
        }
    }
}
Any help shall be greatly appreciated.
Thanks
Try this: https://stackoverflow.com/a/21282418/2521214, and swap the x and y axes, because that solution minimizes page height (for a fixed page width). If you do not want a border, then set it to zero. It is basically what you are coding now.
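For reference, here is a minimal sketch of the first-fit-decreasing pass the question describes (the struct layout and helper name are illustrative, not the asker's real Notepad::MyColumns types):

#include <algorithm>
#include <vector>

struct Cubicle { int minWidth; int minHeight; };
struct Column  { int width = 0; int usedHeight = 0; };

const int MAXIMUM_HEIGHT = 2100; // panel height limit from the question, in mm

std::vector<Column> firstFitDecreasing(std::vector<Cubicle> cubicles)
{
    // Sort by descending minWidth, ties broken by descending minHeight.
    std::sort(cubicles.begin(), cubicles.end(),
              [](const Cubicle& a, const Cubicle& b) {
                  return a.minWidth != b.minWidth ? a.minWidth > b.minWidth
                                                  : a.minHeight > b.minHeight;
              });

    std::vector<Column> columns;
    for (const Cubicle& c : cubicles) {
        bool placed = false;
        for (Column& col : columns) { // first column with enough height left
            if (col.usedHeight + c.minHeight <= MAXIMUM_HEIGHT) {
                col.usedHeight += c.minHeight;
                col.width = std::max(col.width, c.minWidth);
                placed = true;
                break;
            }
        }
        if (!placed) { // no column fits: open a new one
            Column col;
            col.width      = c.minWidth;
            col.usedHeight = c.minHeight;
            columns.push_back(col);
        }
    }
    return columns; // sum of column widths gives the upper bound on panel width
}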

Coming up with simple formula for linked list

I've been at this for days, honestly. I've already implemented the hard part of this function, but now there's just one small thing left. The method I want to write removes every Nth block of blockSize elements from a linked list. So if I have a linked list of size 7, {1,2,3,4,5,6,7}, with N=2 and blockSize=2, I want to remove every Nth (2nd) block of size blockSize (2), so I remove 3, 4, and 7. Now, in order for my for loops to work, I need to write an expression for an int value I created called numBlocksRemoved. It calculates the total number of blocks to be removed; in this case it would be 2. Here's what I have:
numBlockRemoved=(size/blockSize)/N;
However, this only works sometimes, when the numbers divide evenly. If I have size=8, N=2, blockSize=2, then I get numBlockRemoved=2, which is correct. However, for the above example I get an int value of 1, which is incorrect; I want 2. I've thought about this for soooo long it's ridiculous. I just can't come up with a formula that works for numBlockRemoved. Any ideas?
Try
floor(ceil(size/blockSize)/N)
floor(ceil(7/2)/3) = 1
floor(ceil(7/2)/2) = 2
floor(ceil(8/2)/2) = 2
The number of blocks that you have:
blocks = ceil(size/blockSize)
ceil because a not-full block still counts as a block.
then you skip every N, so:
floor(blocks/N)
floor because you either count a block or you don't.
Rounding should be upward when computing the number of blocks as an incomplete block is still a block (but not when computing the number of removed blocks):
numBlockRemoved=((size+blockSize-1)/blockSize)/N;
(size + (blockSize - 1)) / (blockSize * N)
Just think about it systematically - if you take every Nth block of size blockSize, you are effectively removing "superblocks" of size (N * blockSize). So to a first approximation, you have
nBlocks = floor [size / (N * blockSize)]
Now, from your example, even if you don't get a complete block at the end, you still want to remove it. This happens if the remainder after removing the last complete block is more than (N-1) complete blocks. So the algorithm is
superblock = N * blockSize
nBlocks = floor (size / superblock)
remainder = size - (nBlocks * superblock)
if remainder > (N - 1) * blockSize {
    nBlocks += 1
}
You can collapse the +1 adjustment at the end into the formula by adding an amount that will tip the size over a complete superblock iff it is less than one block away (analogous to rounding by adding .5 and then taking the floor). Since this happens if we are even one number into the last block of the superblock, we have to add (blockSize - 1), which gives us
(size + (blockSize - 1)) / (blockSize * N)
Which is aaz's formula above. So you can go ahead and mark his answer as accepted; I just wanted to explain how he arrived at that formula.
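If you want to convince yourself, here is a small throwaway harness that checks the closed form against a direct block-by-block count (just a verification sketch, not part of anyone's answer):

#include <cstdio>

int main()
{
    for (int size = 1; size <= 60; ++size)
        for (int blockSize = 1; blockSize <= 6; ++blockSize)
            for (int N = 1; N <= 5; ++N) {
                // Brute force: walk the list block by block and count every
                // Nth block; a trailing partial block still counts.
                int direct = 0, blockIndex = 0;
                for (int start = 0; start < size; start += blockSize)
                    if (++blockIndex % N == 0)
                        ++direct;
                int formula = (size + blockSize - 1) / (blockSize * N);
                if (direct != formula)
                    std::printf("mismatch: size=%d blockSize=%d N=%d\n",
                                size, blockSize, N);
            }
    std::printf("done\n"); // prints only "done" - the two always agree
    return 0;
}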

randomness algorithm

I need some help regarding an algorithm for randomness. The problem is:
There are 50 events going to happen in an 8-hour duration. Events can happen at random times.
That means in each second, the chance of an event happening is 50/(8*60*60) = 0.001736.
How can I do this with a random generation algorithm?
I can get a random number:
int r = rand();
double chance = r / RAND_MAX;
if (chance < 0.001736)
    then event happens
else
    no event
But most of the time chance comes out as 0, and 0 < 0.001736, so I am getting more events than required.
Any suggestions?
Sorry, I forgot to mention: I calculated chance as
double chance = static_cast<double>(r) / static_cast<double>(RAND_MAX);
or, equivalently, with C-style casts:
double chance = (double)r / (double)RAND_MAX;
Both r and RAND_MAX are integers, so the expression
double chance = r / RAND_MAX;
is computed with integer arithmetic. Try:
double chance = 1.0 * r / RAND_MAX;
which will cause the division to be a floating point division.
However, a better solution would be to use a random function that returns a floating point value in the first place. If you use an integer random number generator, you will get some bias errors in your probability calculations.
If you choose each second whether an event happens, you could end up with anywhere from 0 to 8*60*60 events. If exactly 50 events is a constraint, choose 50 random times within the 8-hour period and store them off.
Create a list of 50 numbers.
Fill them with random numbers between 1 and 8 * 60 * 60.
Sort them, and you have the 50 seconds.
Note that you can get duplicates.
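In modern C++ the same idea looks something like this (a minimal sketch using <random>; the function name and seeding choice are illustrative):

#include <algorithm>
#include <random>
#include <vector>

// Pick exactly `events` random seconds within the period, sorted ascending.
std::vector<int> pickEventTimes(int events = 50, int seconds = 8 * 60 * 60)
{
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dist(0, seconds - 1);

    std::vector<int> times(events);
    for (int& t : times)
        t = dist(gen); // duplicates are possible, as noted above
    std::sort(times.begin(), times.end());
    return times;
}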
Exactly 50, or on average 50?
You might want to look into the Exponential distribution and find a library for your language that supports it.
The Exponential distribution will give you the intervals between events that occur randomly at a specified average rate.
You can "fake" it with a uniform RNG as follows:
double u;
do
{
    // Get a uniformly-distributed random double between
    // zero (inclusive) and 1 (exclusive)
    u = rng.nextDouble();
} while (u == 0d); // Reject zero; u must be positive for this to work.
return (-Math.log(u)) / rate;
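Since the question is C++, the standard library can do this directly with std::exponential_distribution (a sketch; the rate comes from the question's 50 events per 8 hours):

#include <random>

// Returns the number of seconds until the next event.
double nextEventGapSeconds(std::mt19937& gen)
{
    const double rate = 50.0 / (8 * 60 * 60);        // events per second
    std::exponential_distribution<double> gap(rate); // mean gap = 1/rate = 576 s
    return gap(gen);
}

Summing successive gaps gives the event times; on average you get 50 events in 8 hours, but the exact count varies run to run.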
Why not create a 28,800-element list and pull 50 elements from it to determine the times of the events? This assumes that two events can't occur at the same time and that each event takes 1 second. You can use the random number generator to generate integer values between 0 and x, so that it is possible to pick within the limits.