Calculating Minimum and Maximum temperature of past 60 sec [closed] - c++

This is an algorithm question where I need some ideas for my project.
I have one function which measures the temperature every 100 ms. My question is: when the user asks, I want to calculate the minimum and maximum temperature over the last 60 seconds.
Note: I don't store the measured temperatures in any array due to memory restrictions.

In most cases temperature does not change very fast, so sampling at 10 sps is probably excessive; however, if the readings are noisy, such oversampling is useful for getting stable readings at higher resolution.
Given the typical dynamics of temperature change, you can instead accumulate the sum of samples over a short period of N seconds, and every N seconds add that sum to a moving-window buffer and reset the accumulator.
The min/max value over the window period can then be found in the buffer without having to retain all the samples over that period.
For example, given the following function:
#include <limits.h>

void temperatureWindowedMinMax( int& min, int& max )
{
    static const int TEMPERATURE_SENSOR_PIN = A0 ;  // Analogue pin for sensor
    static const long SAMPLE_PERIOD_MS = 100 ;      // 10 sps
    static const int SAMPLE_SUM_N = 20 ;            // Accumulate 2 seconds of samples
    static const int MINMAX_BUFFER_LENGTH = 30 ;    // 60 second buffer

    static int sample_sum = 0 ;
    static int sample_sum_count = 0 ;
    static int minmax_buffer[MINMAX_BUFFER_LENGTH] = {0} ;
    static int minmax_buffer_index = 0 ;

    unsigned long now = millis() ;
    static unsigned long last_sample_time = now - SAMPLE_PERIOD_MS ; // force sample on first call

    // Get new sample if last sample is SAMPLE_PERIOD_MS old or more
    if( now - last_sample_time >= SAMPLE_PERIOD_MS )
    {
        last_sample_time = now ;
        int sample = analogRead( TEMPERATURE_SENSOR_PIN ) ;

        // Accumulate samples
        sample_sum += sample ;
        sample_sum_count++ ;

        // When SAMPLE_SUM_N samples have been accumulated...
        if( sample_sum_count >= SAMPLE_SUM_N )
        {
            // Add sample sum to min-max buffer
            minmax_buffer[minmax_buffer_index] = sample_sum ;
            minmax_buffer_index++ ;
            if( minmax_buffer_index >= MINMAX_BUFFER_LENGTH )
            {
                minmax_buffer_index = 0 ;
            }

            // Reset sum accumulator
            sample_sum_count = 0 ;
            sample_sum = 0 ;
        }
    }

    // Find min and max in min-max buffer
    // (note: until the buffer has filled once, unused slots are zero
    // and will dominate the minimum)
    min = INT_MAX ;
    max = INT_MIN ;
    for( int i = 0; i < MINMAX_BUFFER_LENGTH; i++ )
    {
        if( minmax_buffer[i] > max ) max = minmax_buffer[i] ;
        if( minmax_buffer[i] < min ) min = minmax_buffer[i] ;

        // NOTE: Convert min/max to temperature here if necessary,
        // bearing in mind that the sum is 20 x larger
        // than the raw analogue sample
    }
}
you might have a Sketch loop() like:
void loop()
{
    int tmin, tmax ;
    temperatureWindowedMinMax( tmin, tmax ) ;

    // tmin and tmax now hold the min and max temperature over the last 60 seconds
}
Note that temperatureWindowedMinMax() handles its own sample timing and need only be called rapidly in the loop. It would be possible to separate the min/max recovery from the sampling by creating two functions and changing the min-max buffer scope:
#include <limits.h>

static const int MINMAX_BUFFER_LENGTH = 30 ;
static int minmax_buffer[MINMAX_BUFFER_LENGTH] = {0} ;

void updateTemperatureMinMax()
{
    static const int TEMPERATURE_SENSOR_PIN = A0 ;  // Analogue pin for sensor
    static const long SAMPLE_PERIOD_MS = 100 ;      // 10 sps
    static const int SAMPLE_SUM_N = 20 ;            // Accumulate 2 seconds of samples

    static int sample_sum = 0 ;
    static int sample_sum_count = 0 ;
    static int minmax_buffer_index = 0 ;

    unsigned long now = millis() ;
    static unsigned long last_sample_time = now - SAMPLE_PERIOD_MS ; // force sample on first call

    // Get new sample if last sample is SAMPLE_PERIOD_MS old or more
    if( now - last_sample_time >= SAMPLE_PERIOD_MS )
    {
        last_sample_time = now ;
        int sample = analogRead( TEMPERATURE_SENSOR_PIN ) ;

        // Accumulate samples
        sample_sum += sample ;
        sample_sum_count++ ;

        // When SAMPLE_SUM_N samples have been accumulated...
        if( sample_sum_count >= SAMPLE_SUM_N )
        {
            // Add sample sum to min-max buffer
            // NOTE: Convert sample_sum to temperature here if necessary,
            // bearing in mind that the sum is 20 x larger
            // than the analogue sample
            minmax_buffer[minmax_buffer_index] = sample_sum ;
            minmax_buffer_index++ ;
            if( minmax_buffer_index >= MINMAX_BUFFER_LENGTH )
            {
                minmax_buffer_index = 0 ;
            }

            // Reset sum accumulator
            sample_sum_count = 0 ;
            sample_sum = 0 ;
        }
    }
}
void getTemperatureWindowedMinMax( int& min, int& max )
{
    // Find min and max in min-max buffer
    min = INT_MAX ;
    max = INT_MIN ;
    for( int i = 0; i < MINMAX_BUFFER_LENGTH; i++ )
    {
        if( minmax_buffer[i] > max ) max = minmax_buffer[i] ;
        if( minmax_buffer[i] < min ) min = minmax_buffer[i] ;

        // NOTE: Convert min/max to temperature here if necessary,
        // bearing in mind that the sum is 20 x larger
        // than the raw analogue sample
    }
}
Then the update and the usage can be independent:
void loop()
{
    updateTemperatureMinMax() ;

    // Every second, do something with min/max
    if( millis() % 1000 == 0 )
    {
        int tmin, tmax ;
        getTemperatureWindowedMinMax( tmin, tmax ) ;

        // tmin and tmax now hold the min and max temperature over the last 60 seconds
    }
}
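As an aside, millis() % 1000 == 0 is true only during a single millisecond in each second, so a fast loop may pass the test several times within that millisecond and a slow loop may miss it entirely. A more robust elapsed-time gate might look like this (a sketch; last_report is a name introduced here):

static unsigned long last_report = 0 ;
if( millis() - last_report >= 1000 )
{
    last_report = millis() ;

    int tmin, tmax ;
    getTemperatureWindowedMinMax( tmin, tmax ) ;
}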
Either way you get control over the memory usage via the constants:
static const int TEMPERATURE_SENSOR_PIN = A0 ; // Analogue pin for sensor
static const long SAMPLE_PERIOD_MS = 100 ; // 10sps
static const int SAMPLE_SUM_N = 20 ; // Accumulate 2 seconds of samples
static const int MINMAX_BUFFER_LENGTH = 30 ; // 60 second buffer
For example, taking a longer average allows a shorter buffer while being less responsive to rapid fluctuations; a shorter average with a longer buffer will capture short temperature spikes at the expense of more memory. In practice, temperature tends to be a low-bandwidth, laggy signal, so the above settings are probably more than adequate in most cases.
Note that in both solutions the conversion from analogue sample sum to temperature need not be carried out on the stored sample sums; it can be performed only on the final min/max result. Doing so is both efficient and avoids data loss through rounding. Note also that the sum of the samples is accumulated, not the average; this too is both more efficient and avoids precision loss.
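For illustration, a conversion applied only to the final min/max might look like the following. The sensor scaling here is an assumption (a 10-bit ADC with a 5 V reference and an LM35-style 10 mV/°C output); substitute your sensor's actual transfer function:

float sumToCelsius( long sum )
{
    // Assumed scaling: 10-bit ADC, 5V reference, LM35-style 10mV/degC sensor
    float sample = (float)sum / SAMPLE_SUM_N ;     // undo the 20x accumulation
    return sample * ( 5.0f / 1023.0f ) * 100.0f ;  // ADC counts -> volts -> degC
}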

Related

Split number into sum of preselected other numbers

I have a number (for example 301, but it can be as large as 10^11).
n = length of that number
I have to break it down into a sum of at most n components. Those components are 0^n, 1^n, 2^n, 3^n, ..., 9^n.
How can I do that?
Since you have 1^n included in your options, this becomes a really simple problem, solvable through a greedy approach.
Firstly, let me clarify that, the way I understand it, for an input N of length n you want some solution to this equation:
A·1^n + B·2^n + C·3^n + ... + H·8^n + I·9^n = N
There are infinitely many possible solutions (just by the theory of equations). One possible solution can be found as follows:
def decompose(N, n):
    a = [x ** n for x in range(0, 10)]
    consts = [0] * 10
    ctr = 9
    while N > 0:          # a[1] == 1, so this terminates before ctr reaches 0
        consts[ctr] = N // a[ctr]
        N = N % a[ctr]
        ctr -= 1
    return consts
This consts array will have the constant values for the above equation at the respective indices.
PS: I've written this in Python, but you can translate it to C++ as you want; I saw that tag later. If you have any confusion regarding the code, feel free to ask in the comments.
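Since the thread is tagged C++, a direct translation of the greedy loop might look like this (a sketch; integer exponentiation is done with a small loop rather than pow to avoid floating-point rounding):

#include <vector>

// consts[d] = how many times d^n is used in the decomposition.
std::vector<long long> decompose(long long N, int n)
{
    std::vector<long long> a(10), consts(10, 0);
    for (int x = 0; x < 10; ++x)
    {
        long long p = 1;
        for (int k = 0; k < n; ++k) p *= x;   // integer x^n
        a[x] = p;
    }
    // a[1] == 1, so the loop always terminates before reaching a[0] == 0.
    for (int ctr = 9; N > 0; --ctr)
    {
        consts[ctr] = N / a[ctr];
        N %= a[ctr];
    }
    return consts;
}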
You could use the following to determine the number of components.
#include <cstdio>
#include <cmath>

int main()
{
    int remain = 301;   // Target number
    int exp = 3;        // Length of number (exponent)
    int total = 0;      // Number of components
    bool first = true;  // Used to determine if plus sign is output

    for ( int comp = 9; comp > 0; --comp )
    {
        int count = 0;  // Number of times this component is needed
        while ( pow(comp, exp) <= remain )
        {
            ++total;    // Count up total number of components
            ++count;    // Count up number of times this component is used
            remain -= int(pow(comp, exp));
        }
        if ( count )    // If count is not zero, component is used
        {
            if ( first )
            {
                first = false;
            }
            else
            {
                printf(" + ");
            }
            if ( count > 1 )
            {
                printf("%d(%d^%d)", count, comp, exp);
            }
            else
            {
                printf("%d^%d", comp, exp);
            }
        }
    }

    if ( total == exp )
    {
        printf("\r\nTarget number has %d components", exp);
    }
    else if ( total < exp )
    {
        printf("\r\nTarget number has less than %d components", exp);
    }
    else
    {
        printf("\r\nTarget number has more than %d components", exp);
    }
    return 0;
}
Output for 301:
6^3 + 4^3 + 2(2^3) + 5(1^3)
Target number has more than 3 components
Output for 251:
6^3 + 3^3 + 2^3
Target number has 3 components

Adjust the Scatter Gather List buffer Address and lengths

I have an original allocated SGL (scatter-gather list) array of structures that contains buffer addresses and lengths. We need to prepare a temporary SGL array based off the original SGL structure array, meeting a few requirements, and later use the temporary SGL array for crypto operations.
Requirement: ignore the first 8 bytes and the last 10 bytes.
// Buffer data structure
typedef struct
{
    void *data_addr;
    int data_len;
} T_BUF_DATA;
Final Array:
T_BUF_DATA final_array[100];
Case1:
Original array: T_BUF_DATA[0] = buf_addr = 0xabc, buf_len = 1
T_BUF_DATA[1] = buf_addr = 0xdef, buf_len = 10
T_BUF_DATA[2] = buf_addr = 0x123, buf_len = 23
final_array[0] = buf_addr = ( 0xdef + 7 ), buf_len = ( 10 - 7 )  // the first buffer's single byte covers 1 of the 8 skipped bytes, so adjust buf_addr by the remaining offset of 7
final_array[1] = buf_addr = 0x123, buf_len = ( 23 - 10 )         // since we need to ignore the last 10 bytes
Case2:
Original array: T_BUF_DATA[0] = buf_addr = 0xabc, buf_len = 100
final_array[0] = buf_addr = ( 0xabc + 8 ), buf_len = 100 - ( 8 + 10 );
We need to implement a generic solution that can handle original arrays with varying buffer lengths while preparing the final array. Can someone please assist me here? I am able to handle case 2, but I am stuck on case 1 and a few other unknown corner cases.
void adjust_my_original_buffer ( T_BUF_DATA *data, int num_bufs )
{
    T_BUF_DATA final_array[100];
    int idx = 0;

    for ( int i = 0 ; i < num_bufs; i++ )
    {
        // prepare the final array
    }
}
Something like the following should work. The idea is to skip whole SG entries at the start (keeping track of remaining initial bytes to be skipped in initial_skip) and skip whole SG entries at the end (keeping track of remaining final bytes to be skipped in final_skip) in order to simplify the problem.
After the simplification, there may be 0, 1 or more SG entries remaining, indicated by the adjusted num_bufs, and the adjusted orig points to the first remaining entry. If there is at least one remaining SG entry, the first entry's data_len will be greater than initial_skip and the last entry's data_len will be greater than final_skip. If there is exactly one remaining SG entry, an additional test is required to check that its data_len is greater than initial_skip + final_skip, reducing the number of remaining SG entries to zero if that is not the case.
A loop copies the remaining SG entries from orig to final and the if statements within the loop adjust the first and last SG entries (which might be a single SG entry).
Finally, the function returns the length of the final SG list, which could be 0 if everything was skipped.
int adjust_my_original_buffer ( const T_BUF_DATA * restrict orig, T_BUF_DATA * restrict final, int num_bufs )
{
    int initial_skip;
    int final_skip;

    // Skip initial bytes.
    initial_skip = 8;
    while ( num_bufs && orig[0].data_len <= initial_skip )
    {
        initial_skip -= orig[0].data_len;
        orig++;
        num_bufs--;
    }

    // Skip final bytes.
    final_skip = 10;
    while ( num_bufs && orig[num_bufs - 1].data_len <= final_skip )
    {
        final_skip -= orig[num_bufs - 1].data_len;
        num_bufs--;
    }

    // If a single SG entry remains, check its length.
    if ( num_bufs == 1 && orig[0].data_len <= initial_skip + final_skip )
    {
        // Singleton SG entry is too short.
        num_bufs = 0;
    }

    // Copy SG entries to final list, adjusting first and last entry.
    for ( int i = 0; i < num_bufs; i++ )
    {
        final[i] = orig[i]; // Copy SG entry.
        if ( i == 0 )
        {
            // Adjust first SG entry.
            final[i].data_addr = (char *)final[i].data_addr + initial_skip;
            final[i].data_len -= initial_skip;
        }
        if ( i == num_bufs - 1 )
        {
            // Adjust last SG entry.
            final[i].data_len -= final_skip;
        }
    }

    return num_bufs;
}
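As a quick check (a hypothetical test harness, not part of the answer), Case 1 from the question can be fed through the function; the addresses are the question's illustrative values:

#include <stdio.h>

int main(void)
{
    T_BUF_DATA orig[3] = {
        { (void *)0xabc, 1 },
        { (void *)0xdef, 10 },
        { (void *)0x123, 23 },
    };
    T_BUF_DATA final[3];

    int n = adjust_my_original_buffer(orig, final, 3);
    for (int i = 0; i < n; i++)
        printf("final[%d]: addr=%p len=%d\n", i, final[i].data_addr, final[i].data_len);
    // Expected per Case 1: addr=0xdf6 (0xdef+7) len=3, then addr=0x123 len=13
    return 0;
}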

How to implement bitwise 3-state operators on memory of any size while maximizing size efficiency?

I can use 2 bits for every 3-state digit to implement it [00 - first, 10 - second, 11/01 - third], but when the second bit is set the first one is useless. In theory there's an implementation that will outperform this method (the 2 bits I mentioned) in size by 37%, which is 1 - log3(2).
The code I already tried:
#include <stdio.h>
#include <math.h>

#define uint unsigned int

uint set( uint x, uint place, uint value ) {
    double result = ( double )x;
    result /= pow( 3, place );
    result += value - ( ( uint )result ) % 3;
    return result * pow( 3, place );
}

uint get( uint x, uint place ) {
    return ( ( uint )( ( ( double )x ) / pow( 3, place ) ) ) % 3;
}

int main( ) {
    uint s = 0;
    for ( int i = 0; i < 20; ++i )
        s = set( s, i, i % 3 );
    for ( int i = 0; i < 20; ++i )
        printf( "get( s, %d ) -> %u\n", i, get( s, i ) );
}
Which prints:
get( s, 0 ) -> 0
get( s, 1 ) -> 1
get( s, 2 ) -> 2
get( s, 3 ) -> 0
...
get( s, 16 ) -> 1
get( s, 17 ) -> 2
get( s, 18 ) -> 0
get( s, 19 ) -> 1
This method saves 20% in size (1 - 32/40; 40 bits would be required with the first method I mentioned). In theory, as the capacity grows the efficiency grows too (towards 37%, of course).
How can I implement a similar 3-state method for data of any size while maximizing size efficiency? If I treat the data as an array of uints and use this method on them, I will only get 20% efficiency (or lower if the data's size isn't a multiple of 4).
NOTE: The only thing I need is size efficiency; I don't care about speed performance (well, except if you choose to use BigInteger instead of uint).
log3(2) is irrelevant.
The maximal possible efficiency for representing 3-valued units is log2(3) bits per unit, and the compression from 2 bits per unit is (2 - log2(3))/2, which is roughly 20.75%. So 20% is pretty good.
You shouldn't use pow for integer exponentiation; aside from being slow, it is sometimes off by 1 ULP, which can be enough to make the result off by 1 once you coerce it to an integer. But there's no need for all that work either; you can pack five 3-state values into a byte (3^5 = 243 < 256), and it's straightforward to build a lookup table with 256 entries, one for each possible byte value.
With the LUT, you can extract a 3-state value from a large vector:
/* All error checking omitted */
uint8_t LUT[243][5] = { {0,0,0,0,0}, {1,0,0,0,0}, ... };

uint8_t extract(const uint8_t* data, int offset) {
    return LUT[data[offset/5]][offset%5];
}
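Building the 243-entry table by hand is tedious; for what it's worth, it can be generated at startup with a few lines (a sketch, not part of the original answer; build_lut is a name introduced here):

/* Fill LUT at startup: byte value b encodes digits b%3, (b/3)%3, ... */
void build_lut(void) {
    for (int b = 0; b < 243; ++b) {
        int v = b;
        for (int d = 0; d < 5; ++d) {
            LUT[b][d] = v % 3;
            v /= 3;
        }
    }
}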
By the way, if a 1215-byte lookup table is to be considered "big" (which seems odd, given that you're talking about a data vector of 1 GB), it's easy enough to compress it by a factor of 4, although it complicates the table construction:
/* All error checking omitted */
uint8_t LUT[] = { /* Left as an exercise */ };

uint8_t extract(const uint8_t* data, unsigned offset) {
    unsigned index = data[offset/5] * 5 + offset % 5;
    return (LUT[index / 4] >> (2 * (index % 4))) & 3;
}
In addition to rici's answer, I want to post the code I wrote, which may help too (a simplified version):
uint8_t ft[ 5 ] = { 1, 3, 3 * 3, 3 * 3 * 3, 3 * 3 * 3 * 3 };

void set( uint8_t *data, int offset, int value ) {
    uint8_t t1 = data[ offset / 5 ], t2 = ft[ offset % 5 ], u8 = t1 / t2;
    u8 += value - u8 % 3;
    data[ offset / 5 ] = t1 + ( u8 - t1 / t2 ) * t2;
}

uint8_t get( uint8_t *data, int offset ) {
    return data[ offset / 5 ] / ft[ offset % 5 ] % 3;
}
Instead of the big lookup table, I re-implemented the pow approach, just a safer and faster version, and added a set function too.
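For illustration, a minimal use of the set/get pair above (assuming the functions as posted; two bytes hold ten 3-state values):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t data[ 2 ] = { 0 };   /* room for 10 three-state values */
    set( data, 7, 2 );
    printf( "get( data, 7 ) -> %u\n", get( data, 7 ) );   /* prints 2 */
    return 0;
}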

algorithm to efficiently send requests to nodes

I want to uniformly distribute traffic to various nodes depending on the configuration of each node. There can be at most 100 nodes, and the percentage of traffic to be distributed to each node can be configured.
So say, if there are 4 nodes:
node 1 - 20
node 2 - 50
node 3 - 10
node 4 - 20
------------
sum - 100
------------
The sum of the values of all nodes should be 100.
Example:
node 1 - 50
node 2 - 1
node 3 - 1
node 4 - 1
.
.
.
node 100 - 1
In the above configuration there are 51 nodes in total. Node 1 is set to 50 and the remaining 50 nodes are configured to 1.
In one scenario, requests could be distributed in the pattern below:
node1,node2,node3,node4,node5,....,node51,node1,node1,node1,node1,node1,node1,node1,......
The above distribution is inefficient because we are sending too much traffic continuously to node1, which may result in node1 rejecting requests.
In another scenario, requests could be distributed in the pattern below:
node1,node2,node1,node3,node1,node4,node1,node5,node1,node6,node1,node7,node1,node8......
In the above scenario requests are distributed more efficiently.
I have found the code below but am not able to understand the idea behind it.
func()
{
    for(int itr=1;itr<=total_requests+1;itr++)
    {
        myval = 0;

        // Search the node that needs to be incremented
        // to best approach the rates of all branches
        for(int j=0;j<Total_nodes;j++)
        {
            if((nodes[j].count*100/itr > nodes[j].value) ||
               ((nodes[j].value - nodes[j].count*100/itr) < myval) ||
               ((nodes[j].value==0 && nodes[j].count ==0 )))
                continue;

            cand = j;
            myval = abs((long)(nodes[j].count*100/itr - nodes[j].value));
        }
        nodes[cand].count++;
    }
    return nodes[cand].nodeID;
}
In the above code, total_requests is the total number of requests received so far.
The total_requests variable is incremented every time; consider it a global value for understanding purposes.
Total_nodes is the total number of nodes configured, and each node is represented using the following structure.
nodes is a struct:
struct node{
    int count;
    int value;
    int nodeID;
};
For example, if 4 nodes are configured:
node 1 - 20
node 2 - 50
node 3 - 10
node 4 - 20
------------
sum - 100
------------
There will be four nodes[4] created with the following values:
node1{
    count = 0;
    value = 20;
    nodeID = 1;
};
node2{
    count = 0;
    value = 50;
    nodeID = 2;
};
node3{
    count = 0;
    value = 10;
    nodeID = 3;
};
node4{
    count = 0;
    value = 20;
    nodeID = 4;
};
Could you please explain the algorithm, or the idea of how it actually distributes requests efficiently?
nodes[j].count*100/itr is the floor of the percentage of requests that node j has answered so far. nodes[j].value is the percentage of requests that node j should answer. The code that you posted looks for the node lagging the furthest behind its target percentage (more or less, subject to the wobbliness of integer division) and assigns it the next request.
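That idea can be written more plainly. The following is a sketch of just the selection step (not the posted code verbatim; pickNode and requests_so_far are names introduced here), using the node struct from the question:

// Pick the node whose served share lags its target percentage the most.
int pickNode(struct node nodes[], int total_nodes, int requests_so_far)
{
    int cand = 0;
    long best_lag = -1;
    for (int j = 0; j < total_nodes; j++)
    {
        long served_pct = (long)nodes[j].count * 100 / (requests_so_far + 1);
        long lag = nodes[j].value - served_pct;   // how far behind target
        if (lag > best_lag)
        {
            best_lag = lag;
            cand = j;
        }
    }
    nodes[cand].count++;
    return nodes[cand].nodeID;
}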
Hmmm. Seems that as you reach 100 nodes they must each take 1% of the traffic?
I honestly have no idea what the function you provide is doing. I assume it's trying to find the node which is furthest from its long-term average load. But if total_requests is the total to date, then I don't get what the outer for(int itr=1;itr<=total_requests+1;itr++) loop is doing, unless that's actually part of some test to show how it distributes load?
Anyway, basically what you are doing is similar to constructing a non-uniform random sequence. With up to 100 nodes, if I can assume (for a moment) that 0..999 gives sufficient resolution, then you could use an "id_vector[]" with 1000 node-ids, which is filled with n1 copies of node-1's ID, n2 copies of node-2's ID, and so on -- where node-1 is to receive n1/1000 of the traffic, and so on. The decision process is then very, very simple -- pick id_vector[random() % 1000]. Over time, the nodes will receive about the right amount of traffic.
If you are unhappy with randomly distributing traffic, then you can seed the id_vector with node-ids so that you can select by "round-robin" and get a suitable frequency for each node. One way to do that would be to randomly shuffle the id_vector constructed as above (and, perhaps, reshuffle occasionally, so that if one shuffle is "bad", you aren't stuck with it). Or you could do a one-time leaky-bucket thing and fill the id_vector from that. Each time around the id_vector this guarantees each node will receive its allotted number of requests.
The finer grain you make the id_vector, the better control you get over the short term frequency of requests to each node.
Mind you, all the above rather assumes that the relative load for the nodes is constant. If not, then you would need to (every now and then?) adjust the id_vector.
Edit to add more detail as requested...
...suppose we have just 5 nodes, but we express the "weight" of each one as n/1000, to allow for up to 100 nodes. Suppose they have IDs 1..5, and weights:
ID=1, weight = 100
ID=2, weight = 375
ID=3, weight = 225
ID=4, weight = 195
ID=5, weight = 105
Which, obviously, add up to 1000.
So we construct an id_vector[1000] so that:
id_vector[  0.. 99] = 1   -- first 100 entries = ID 1
id_vector[100..474] = 2   -- next  375 entries = ID 2
id_vector[475..699] = 3   -- next  225 entries = ID 3
id_vector[700..894] = 4   -- next  195 entries = ID 4
id_vector[895..999] = 5   -- last  105 entries = ID 5
And now if we shuffle id_vector[] we get a random sequence of node selections, but over 1000 requests, the right average frequency of requests to each node.
For the amusement value I had a go at a "leaky bucket" to see how well it could maintain a steady frequency of requests to each node, by filling the id_vector using one leaky bucket for each node. The code to do this, and see how well it does, and how well the simple random version does, is enclosed below.
Each leaky bucket has a cc count of the number of requests that should be sent (to other nodes) before the next request is sent to this one. Each time a request is dispatched, all buckets have their cc count decremented, and the node whose bucket has the smallest cc (or smallest id, if the cc are equal) is sent the request, and at that point the node's bucket's cc is recharged. (Each request causes all the buckets to drip once, and the bucket for the chosen node is recharged.)
The cc is the integer part of the bucket's "contents". The initial value for cc is q = 1000 / w, where w is the node's weight. And each time the bucket is recharged, q is added to cc. To do things precisely, however, we need to deal with the remainder r = 1000 % w... or to put it another way, the "contents" have a fractional part -- which is where cr comes in. The true value of the contents is cc + cr/w (where cr/w is a true fraction, not an integer division). The initial value of that is cc = q and cr = r. Each time the bucket is recharged, q is added to cc, and r is added to cr. When cr/w >= 1/2, we round up, so cc += 1, and cr -= w (adding one to the integer part is balanced by subtracting 1 -- ie w/w -- from the fractional part). To test for cr/w >= 1/2, the code actually tests (cr * 2) >= w. Hopefully the bucket_recharge() function will make sense (now).
The leaky-bucket is run 1000 times to fill the id_vector[]. The little bit of testing shows that this maintains a pretty steady frequency for all nodes, and an exact number of packets per node every time around the id_vector[] cycle.
The little bit of testing shows that the random() shuffle approach has much more variable frequency, within each id_vector[] cycle, but still provides an exact number of packets per node for each cycle.
The steadiness of the leaky bucket assumes a steady stream of incoming requests. Which may be an entirely unrealistic assumption. If the requests arrive in large bursts (large compared to the id_vector[] cycle, 1000 in this example), then the variability of the (simple) random() shuffle approach may be dwarfed by the variability in request arrival!
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

enum
{
    n_nodes = 5,      /* number of nodes   */
    w_res   = 1000,   /* weight resolution */
} ;

struct node_bucket
{
    int id ;          /* 1 origin              */
    int cc ;          /* current count         */
    int cr ;          /* current remainder     */
    int q ;           /* recharge -- quotient  */
    int r ;           /* recharge -- remainder */
    int w ;           /* weight                */
} ;

static void bucket_recharge(struct node_bucket* b) ;
static void node_checkout(int weights[], int id_vector[], bool rnd) ;
static void node_shuffle(int id_vector[]) ;

/*------------------------------------------------------------------------------
 * To begin at the beginning...
 */
int
main(int argc, char* argv[])
{
    int node_weights[n_nodes] = { 100, 375, 225, 195, 105 } ;
    int id_vector[w_res] ;
    int cx ;
    struct node_bucket buckets[n_nodes] ;

    /* Initialise the buckets -- charged
     */
    cx = 0 ;
    for (int id = 0 ; id < n_nodes ; ++id)
    {
        struct node_bucket* b ;

        b = &buckets[id] ;

        b->id = id + 1 ;            /* 1 origin */
        b->w  = node_weights[id] ;
        cx   += b->w ;

        b->q = w_res / b->w ;
        b->r = w_res % b->w ;

        b->cc = 0 ;
        b->cr = 0 ;
        bucket_recharge(b) ;
    } ;
    assert(cx == w_res) ;

    /* Run the buckets for one cycle to fill the id_vector
     */
    for (int i = 0 ; i < w_res ; ++i)
    {
        int id ;

        id = 0 ;
        buckets[id].cc -= 1 ;       /* drip */
        for (int jd = 1 ; jd < n_nodes ; ++jd)
        {
            buckets[jd].cc -= 1 ;   /* drip */
            if (buckets[jd].cc < buckets[id].cc)
                id = jd ;
        } ;

        id_vector[i] = id + 1 ;     /* '1' origin */
        bucket_recharge(&buckets[id]) ;
    } ;

    /* Diagnostics and checking
     *
     * First, check that the id_vector contains exactly the right number of
     * each node, and that the bucket state at the end is the same (apart from
     * cr) as it is at the beginning.
     */
    int nf[n_nodes] = { 0 } ;
    for (int i = 0 ; i < w_res ; ++i)
        nf[id_vector[i] - 1] += 1 ;

    for (int id = 0 ; id < n_nodes ; ++id)
    {
        struct node_bucket* b ;

        b = &buckets[id] ;
        printf("ID=%2d weight=%3d freq=%3d (cc=%3d cr=%+4d q=%3d r=%3d)\n",
                b->id, b->w, nf[id], b->cc, b->cr, b->q, b->r) ;
    } ;

    node_checkout(node_weights, id_vector, false /* not random */) ;

    /* Try the random version -- with shuffled id_vector.
     */
    int iv ;

    iv = 0 ;
    for (int id = 0 ; id < n_nodes ; ++id)
    {
        for (int i = 0 ; i < node_weights[id] ; ++i)
            id_vector[iv++] = id + 1 ;
    } ;
    assert(iv == 1000) ;

    for (int s = 0 ; s < 17 ; ++s)
        node_shuffle(id_vector) ;

    node_checkout(node_weights, id_vector, true /* random */) ;

    return 0 ;
} ;

static void
bucket_recharge(struct node_bucket* b)
{
    b->cc += b->q ;
    b->cr += b->r ;
    if ((b->cr * 2) >= b->w)
    {
        b->cc += 1 ;
        b->cr -= b->w ;
    } ;
} ;

static void
node_checkout(int weights[], int id_vector[], bool rnd)
{
    struct node_test
    {
        int last_t ;
        int count ;
        int cycle_count ;
        int intervals[w_res] ;
    } ;

    struct node_test tests[n_nodes] = { { 0 } } ;

    printf("\n---Test Run: %s ---\n", rnd ? "Random Shuffle" : "Leaky Bucket") ;

    /* Test run
     */
    int s ;

    s = 0 ;
    for (int t = 1 ; t <= (w_res * 5) ; ++t)
    {
        int id ;

        id = id_vector[s++] - 1 ;

        if (tests[id].last_t != 0)
            tests[id].intervals[t - tests[id].last_t] += 1 ;

        tests[id].count += 1 ;
        tests[id].last_t = t ;

        if (s == w_res)
        {
            printf("At time %4d\n", t) ;
            for (id = 0 ; id < n_nodes ; ++id)
            {
                struct node_test* nt ;
                long total_intervals ;

                nt = &tests[id] ;

                total_intervals = 0 ;
                for (int i = 0 ; i < w_res ; ++i)
                    total_intervals += (long)i * nt->intervals[i] ;

                printf("  ID=%2d weight=%3d count=%4d(+%3d) av=%6.2f vs %6.2f\n",
                        id+1, weights[id], nt->count, nt->count - nt->cycle_count,
                        (double)total_intervals / nt->count,
                        (double)w_res / weights[id]) ;
                nt->cycle_count = nt->count ;

                for (int i = 0 ; i < w_res ; ++i)
                {
                    if (nt->intervals[i] != 0)
                    {
                        int h ;

                        printf("  %6d x %4d ", i, nt->intervals[i]) ;
                        h = ((nt->intervals[i] * 75) + ((nt->count + 1) / 2)) /
                                                                    nt->count ;
                        while (h-- != 0)
                            printf("=") ;
                        printf("\n") ;
                    } ;
                } ;
            } ;

            if (rnd)
                node_shuffle(id_vector) ;

            s = 0 ;
        } ;
    } ;
} ;

static void
node_shuffle(int id_vector[])
{
    for (int iv = 0 ; iv < (w_res - 1) ; ++iv)
    {
        int is, s ;

        is = (int)(random() % (w_res - iv)) + iv ;

        s = id_vector[iv] ;
        id_vector[iv] = id_vector[is] ;
        id_vector[is] = s ;
    } ;
} ;

How to improve upon this?

There are n groups of friends waiting in a queue in front of a bus station. The i-th group consists of ai men. There is also a single bus that works the route. The size of the bus is x, that is, it can transport x men simultaneously.
When the bus comes (it always comes empty) to the bus station, several groups from the head of the queue get on. Of course, groups of friends don't want to split, so they board only if the bus can hold the whole group. On the other hand, no one wants to lose his position, that is, the order of groups never changes.
The question is: how do we choose the size x of the bus in such a way that the bus can transport all the groups and, every time the bus moves off from the bus station, there is no empty space in the bus (the total number of men inside equals x)?
Input Format:
The first line contains the only integer n (1≤n≤10^5). The second line contains n space-separated integers a1,a2,…,an (1≤ai≤10^4).
Output Format:
Print all the possible sizes of the bus in the increasing order.
Sample:
8
1 2 1 1 1 2 1 3
Output:
3 4 6 12
I made this code:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main(void)
{
    int max=0,sum=0,i,n;
    cin>>n;
    int values[100000];
    for ( i = 0; i < n; i++ )
    {
        cin>>values[i];
        sum = sum + values[i];
        if ( values[i] > max )
            max = values[i];
    }

    int p = 0,j;
    int count = 0;
    int flag, garb;
    vector<int> final;
    for ( i = 0; i < n; i++ )
    {
        p = p + values[i];
        j = 0;
        if ( p >= max && sum%p == 0)
        {
            flag = 0;
            while ( j < n )
            {
                garb = p;
                while (garb!= 0)
                {
                    garb = garb - values[j++];
                    if ( garb < 0 )
                        flag = 1;
                }
            }
            if ( flag == 0 )
            {
                final.push_back(p);
                count++;
            }
        }
    }

    sort(final.begin(),final.end());
    for ( j = 0; j < count; j++ )
    {
        cout<<final[j]<<"\t";
    }
    return 0;
}
Edit: I did this, in which, basically, I am checking whether the found divisor satisfies the condition; if at any point of time I get a negative integer when taking the difference with the values, I mark it using a flag. However, it seems to give me a seg fault now. Why?
I firstly calculated the maximum value out of all the values, and then I checked if it's a divisor of the sum of the values. However, this approach doesn't work for input such as:
10
2 2 1 1 1 1 1 2 1 2
My output is
2 7 14
whereas the output should be
7 14
only.
Any other approach that I can go with?
Thanks!
I can think of the following simple solution (since your present concern is correctness and not time complexity):
1. Calculate the sum of all ai's (as you are already doing).
2. Calculate the maximum of all ai's (as you are already doing).
3. Find all the factors of the sum that are >= max(ai).
4. For each such factor, iterate through the ai's and check whether the bus condition is satisfied (see the sketch below).
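A sketch of that approach in C++ (names introduced here; it enumerates the factors of the sum that are at least the maximum group size, then simulates the boarding for each):

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int n;
    cin >> n;
    vector<int> a(n);
    long long sum = 0;
    int mx = 0;
    for (int &v : a)
    {
        cin >> v;
        sum += v;
        if (v > mx) mx = v;
    }

    // Step 3: collect all factors of the sum that are >= max(ai).
    vector<long long> sizes;
    for (long long d = 1; d * d <= sum; ++d)
    {
        if (sum % d != 0) continue;
        if (d >= mx) sizes.push_back(d);
        if (sum / d != d && sum / d >= mx) sizes.push_back(sum / d);
    }
    sort(sizes.begin(), sizes.end());

    // Step 4: for each factor x, simulate boarding in queue order; every
    // departure must fill the bus to exactly x without splitting a group.
    for (long long x : sizes)
    {
        long long load = 0;
        bool ok = true;
        for (int v : a)
        {
            load += v;
            if (load == x) load = 0;                   // bus leaves exactly full
            else if (load > x) { ok = false; break; }  // group would have to split
        }
        if (ok) cout << x << " ";
    }
    cout << "\n";
    return 0;
}

For the sample input (8 groups: 1 2 1 1 1 2 1 3) this prints 3 4 6 12, matching the expected output.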