Find all puddles on the square (algorithm) - C++

The problem is defined as follows:
You're given a square. The square is lined with flat flagstones of size 1m x 1m and is surrounded by grass. Flagstones may be at different heights. It starts raining. Determine where puddles will form and compute how much water they will hold. Water doesn't flow diagonally through corners. The grass can absorb any volume of water at any time.
Input:
width height
width*height non-negative numbers describing the height of each flagstone above grass level.
Output:
Volume of water from puddles.
width*height characters marking where puddles will form and where they won't:
. - no puddle
# - puddle
Examples
Input:
8 8
0 0 0 0 0 1 0 0
0 1 1 1 0 1 0 0
0 1 0 2 1 2 4 5
0 1 1 2 0 2 4 5
0 3 3 3 3 3 3 4
0 3 0 1 2 0 3 4
0 3 3 3 3 3 3 0
0 0 0 0 0 0 0 0
Output:
11
........
........
..#.....
....#...
........
..####..
........
........
Input:
16 16
8 0 1 0 0 0 0 2 2 4 3 4 5 0 0 2
6 2 0 5 2 0 0 2 0 1 0 3 1 2 1 2
7 2 5 4 5 2 2 1 3 6 2 0 8 0 3 2
2 5 3 3 0 1 0 3 3 0 2 0 3 0 1 1
1 0 1 4 1 1 2 0 3 1 1 0 1 1 2 0
2 6 2 0 0 3 5 5 4 3 0 4 2 2 2 1
4 2 0 0 0 1 1 2 1 2 1 0 4 0 5 1
2 0 2 0 5 0 1 1 2 0 7 5 1 0 4 3
13 6 6 0 10 8 10 5 17 6 4 0 12 5 7 6
7 3 0 2 5 3 8 0 3 6 1 4 2 3 0 3
8 0 6 1 2 2 6 3 7 6 4 0 1 4 2 1
3 5 3 0 0 4 4 1 4 0 3 2 0 0 1 0
13 3 6 0 7 5 3 2 21 8 13 3 5 0 13 7
3 5 6 2 2 2 0 2 5 0 7 0 1 3 7 5
7 4 5 3 4 5 2 0 23 9 10 5 9 7 9 8
11 5 7 7 9 7 1 0 17 13 7 10 6 5 8 10
Output:
103
................
..#.....###.#...
.......#...#.#..
....###..#.#.#..
.#..##.#...#....
...##.....#.....
..#####.#..#.#..
.#.#.###.#..##..
...#.......#....
..#....#..#...#.
.#.#.......#....
...##..#.#..##..
.#.#.........#..
......#..#.##...
.#..............
................
I tried different approaches: flood fill from the maximum value, then from the minimum value, but it doesn't work for every input or requires complicating the code. Any ideas?
I'm interested in an algorithm with complexity O(n^2) or O(n^3).

Summary
I would be tempted to try and solve this using a disjoint-set data structure.
The algorithm would be to iterate over all heights in the map performing a floodfill operation at each height.
Details
For each height x (starting at 0)
Connect all flagstones of height x to their neighbours if the neighbour height is <= x (storing connected sets of flagstones in the disjoint set data structure)
Remove any sets that connected to the grass
Mark all flagstones of height x in still remaining sets as being puddles
Add the total count of flagstones in remaining sets to a total t
At the end t gives the total volume of water.
Worked Example
0 0 0 0 0 1 0 0
0 1 1 1 0 1 0 0
0 1 0 2 1 2 4 5
0 1 1 2 0 2 4 5
0 3 3 3 3 3 3 4
0 3 0 1 2 0 3 4
0 3 3 3 3 3 3 0
0 0 0 0 0 0 0 0
Connect all flagstones of height 0 into sets A,B,C,D,E,F
A A A A A 1 B B
A 1 1 1 A 1 B B
A 1 C 2 1 2 4 5
A 1 1 2 D 2 4 5
A 3 3 3 3 3 3 4
A 3 E 1 2 F 3 4
A 3 3 3 3 3 3 A
A A A A A A A A
Remove flagstones connecting to the grass, and mark remaining as puddles
          1
  1 1 1   1
  1 C 2 1 2 4 5   #
  1 1 2 D 2 4 5   #
  3 3 3 3 3 3 4
  3 E 1 2 F 3 4   # #
  3 3 3 3 3 3
Count remaining set size t=4
Connect all of height 1
          G
  C C C   G
  C C 2 D 2 4 5   #
  C C 2 D 2 4 5   #
  3 3 3 3 3 3 4
  3 E E 2 F 3 4   # #
  3 3 3 3 3 3
Remove flagstones connecting to the grass, and mark remaining as puddles
      2   2 4 5   #
      2   2 4 5   #
  3 3 3 3 3 3 4
  3 E E 2 F 3 4   # # #
  3 3 3 3 3 3
t=4+3=7
Connect all of height 2
      A   B 4 5   #
      A   B 4 5   #
  3 3 3 3 3 3 4
  3 E E E E 3 4   # # #
  3 3 3 3 3 3
Remove flagstones connecting to the grass, and mark remaining as puddles
            4 5   #
            4 5   #
  3 3 3 3 3 3 4
  3 E E E E 3 4   # # # #
  3 3 3 3 3 3
t=7+4=11
Connect all of height 3
            4 5   #
            4 5   #
  E E E E E E 4
  E E E E E E 4   # # # #
  E E E E E E
Remove flagstones connecting to the grass, and mark remaining as puddles
            4 5   #
            4 5   #
              4
              4   # # # #
After doing this for heights 4 and 5 nothing will remain.
A preprocessing step to create lists of all locations with each height should mean that the algorithm is close to O(n^2).
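Here is a minimal C++ sketch of that approach, assuming a plain union-find with path compression and a virtual node for the surrounding grass. For brevity the volume step rescans the whole grid once per height level; keeping per-set counts inside the structure, as described above, would avoid that rescan and bring it close to O(n^2).
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <utility>
#include <vector>
using namespace std;

struct DSU {                              // union-find with path compression
    vector<int> parent;
    DSU(int n) : parent(n) { iota(parent.begin(), parent.end(), 0); }
    int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

int main() {
    int W, H;
    scanf("%d %d", &W, &H);
    vector<vector<int>> h(H, vector<int>(W));
    for (auto& row : h)
        for (int& v : row) scanf("%d", &v);

    const int GRASS = W * H;              // virtual node for the surrounding grass
    DSU dsu(W * H + 1);
    vector<vector<char>> puddle(H, vector<char>(W, 0));

    // Group cells by height so each level is processed once.
    int maxH = 0;
    for (auto& row : h)
        for (int v : row) maxH = max(maxH, v);
    vector<vector<pair<int, int>>> byHeight(maxH + 1);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) byHeight[h[y][x]].push_back({y, x});

    const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
    long long volume = 0;
    for (int level = 0; level <= maxH; ++level) {
        // Connect every flagstone of this height to neighbours that are not
        // higher; flagstones on the border also connect to the grass.
        for (auto [y, x] : byHeight[level])
            for (int d = 0; d < 4; ++d) {
                int ny = y + dy[d], nx = x + dx[d];
                if (ny < 0 || ny >= H || nx < 0 || nx >= W)
                    dsu.unite(y * W + x, GRASS);
                else if (h[ny][nx] <= level)
                    dsu.unite(y * W + x, ny * W + nx);
            }
        // Every flagstone at or below the current water line whose set does not
        // reach the grass holds one more unit of water for this layer.
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (h[y][x] <= level && dsu.find(y * W + x) != dsu.find(GRASS)) {
                    puddle[y][x] = 1;
                    ++volume;
                }
    }

    printf("%lld\n", volume);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) putchar(puddle[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
On the first example above this prints 11 and the puddle map shown in the question.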

This seems to be working nicely. The idea is that it is a recursive function that checks whether there is an "outward flow" allowing the water to escape to the edge; the cells that have no such escape will puddle. I tested it on your two input files and it works quite nicely, and I copied the output for both files for you. Pardon my nasty use of global variables and whatnot; I figured it was the concept behind the algorithm that mattered, not good style :)
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;

int SIZE_X;
int SIZE_Y;
bool **result;
int **INPUT;

// Returns true if water sitting at height `value` on cell (x, y) can escape to
// the edge of the square (the grass), false if it is trapped.
bool flowToEdge(int x, int y, int value, vector<bool>& visited) {
    if(x < 0 || x == SIZE_X || y < 0 || y == SIZE_Y) return true;   // stepped onto the grass
    if(visited[(x * SIZE_Y) + y]) return false;                     // already explored from here
    if(value < INPUT[x][y]) return false;                           // blocked by a higher flagstone
    visited[(x * SIZE_Y) + y] = true;
    bool left  = flowToEdge(x - 1, y, value, visited);
    bool right = flowToEdge(x + 1, y, value, visited);
    bool up    = flowToEdge(x, y + 1, value, visited);
    bool down  = flowToEdge(x, y - 1, value, visited);
    return (left || up || down || right);
}

int main() {
    ifstream myReadFile;
    myReadFile.open("test.txt");
    myReadFile >> SIZE_X;
    myReadFile >> SIZE_Y;
    INPUT = new int*[SIZE_X];
    result = new bool*[SIZE_X];
    for(int i = 0; i < SIZE_X; i++) {
        INPUT[i] = new int[SIZE_Y];
        result[i] = new bool[SIZE_Y];
        for(int j = 0; j < SIZE_Y; j++) {
            myReadFile >> INPUT[i][j];
            result[i][j] = false;
        }
    }
    for(int i = 0; i < SIZE_X; i++) {
        for(int j = 0; j < SIZE_Y; j++) {
            // One visited flag per cell, cleared for every starting cell. You could
            // avoid the re-clearing by using a map keyed on coordinate pairs instead.
            vector<bool> visited(SIZE_X * SIZE_Y, false);
            result[i][j] = flowToEdge(i, j, INPUT[i][j], visited);
        }
    }
    // Prints 1 where the water can drain away and 0 where a puddle forms.
    for(int i = 0; i < SIZE_X; i++) {
        cout << endl;
        for(int j = 0; j < SIZE_Y; j++)
            cout << result[i][j];
    }
    cout << endl;
}
The 16 by 16 file:
1111111111111111
1101111100010111
1111111011101011
1111000110101011
1011001011101111
1110011111011111
1100000101101011
1010100010110011
1110111111101111
1101101011011101
1010111111101111
1110011010110011
1010111111111011
1111110110100111
1011111111111111
1111111111111111
The 8 by 8 file
11111111
11111111
11011111
11110111
11111111
11000011
11111111
11111111
You could optimize this algorithm easily and considerably by doing several things. A: returning true immediately upon finding a route to the edge, instead of always exploring all four directions, would speed it up considerably. You could also connect it globally to the current set of results, so that any given point only has to find a flow path to an already-known draining point rather than all the way to the edge.
As for the work involved: each cell may end up examining every other node, so with those optimizations we should be able to get this well below n^2 for most cases, but it is still an n^3 algorithm in the worst case... and building that properly would be quite involved (with proper optimization logic... dynamic programming for the win!)
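For example, optimization A is just a matter of letting || short-circuit instead of computing all four directions first (a sketch using the same globals as the code above; flowToEdgeFast is a made-up name):
// Stops exploring as soon as any direction reaches the edge, because ||
// does not evaluate its right-hand side once the left-hand side is true.
bool flowToEdgeFast(int x, int y, int value, vector<bool>& visited) {
    if(x < 0 || x == SIZE_X || y < 0 || y == SIZE_Y) return true;   // reached the grass
    if(visited[(x * SIZE_Y) + y]) return false;                     // already explored
    if(value < INPUT[x][y]) return false;                           // blocked by a higher flagstone
    visited[(x * SIZE_Y) + y] = true;
    return flowToEdgeFast(x - 1, y, value, visited)
        || flowToEdgeFast(x + 1, y, value, visited)
        || flowToEdgeFast(x, y + 1, value, visited)
        || flowToEdgeFast(x, y - 1, value, visited);
}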
EDIT:
The modified code works for the following circumstances:
8 8
1 1 1 1 1 1 1 1
1 0 0 0 0 0 0 1
1 0 1 1 1 1 0 1
1 0 1 0 0 1 0 1
1 0 1 1 0 1 0 1
1 0 1 1 0 1 0 1
1 0 0 0 0 1 0 1
1 1 1 1 1 1 1 1
And these are the results:
11111111
10000001
10111101
10100101
10110101
10110101
10000101
11111111
Now when we change that 1 in the bottom row to a 0, we want to see no puddling.
8 8
1 1 1 1 1 1 1 1
1 0 0 0 0 0 0 1
1 0 1 1 1 1 0 1
1 0 1 0 0 1 0 1
1 0 1 1 0 1 0 1
1 0 1 1 0 1 0 1
1 0 0 0 0 1 0 1
1 1 1 1 1 1 0 1
And these are the results
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1

Related

Pandas: count when condition is met in subgroups

I have a dataframe that looks like:
subgroup value
0 1 0
1 1 1
2 1 1
3 1 0
4 2 0
5 2 0
6 2 0
7 3 0
8 3 1
9 3 0
10 3 0
I need to add a column that increments by 1 whenever a subgroup contains at least one non-zero value. Please note that if the value 1 is repeated more than once within the same subgroup, it doesn't affect the count.
The result should be:
subgroup value count
0 1 0 1
1 1 1 1
2 1 1 1
3 1 0 1
4 2 0 1
5 2 0 1
6 2 0 1
7 3 0 2
8 3 1 2
9 3 0 2
10 3 0 2
Thank you in advance for your help!
Using shift with -1 and 1, then cumsum on the result:
mask=(df.value.ne(df.value.shift()))&(df.value.ne(df.value.shift(-1)))
mask.cumsum()
Out[18]:
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 1
8 2
9 2
10 2
Name: value, dtype: int32
Using merge and groupby
df.merge(df.groupby('subgroup').value.sum().gt(0).cumsum().reset_index(name='out'))
subgroup value out
0 1 0 1
1 1 1 1
2 1 1 1
3 1 0 1
4 2 0 1
5 2 0 1
6 2 0 1
7 3 0 2
8 3 1 2
9 3 0 2
10 3 0 2

QuickSort crash on odd sized set

I've been trying to implement quicksort and have been having a lot of problems. I even copied a lot of implementations and accepted answers from the net, and they ALL crash on odd-sized arrays/vectors if you run them enough times (each run I feed quicksort random numbers to be sorted, rather than pretending my code works just because it can sort one particular set of numbers).
Here is my code and also prints to help debug the error.
template <typename T>
void quickSortMidPivot(vector<T>&vec, size_t left, size_t right)
{
    mcount++;
    if(right - left < 1)
        return;
    //crash all the time
    //if(left >= right)
    //    return;
    size_t l = left;
    size_t r = right;
    T pivot = vec[left + ((right-left)/2)];
    cout << endl << "PivotValue:" << pivot << endl;
    while (l <= r)
    {
        while (vec[l] < pivot)
            l++;
        while (vec[r] > pivot)
            r--;
        if (l <= r) {
            cout << endl << "swap:" << vec[l] << "&" << vec[r] << endl;
            std::swap(vec[l], vec[r]);
            l++;
            r--;
            for (int i =left; i<=right; i++)
                cout << vec[i] << " ";
        }
    }
    cout << endl << "left:" << left << " r:" << r << endl;
    cout << "l:" << l << " right:" << right << endl;
    if(left < r)
        quickSortMidPivot(vec, left, r);
    if(l < right)
        quickSortMidPivot(vec, l, right);
}
//in main
quickSortMidPivot(dsVector, 0, dsVector.size() - 1);
mcount is a global just so that I can count the number of recursive calls. Help me figure out the most effective implementation...
Here is some debug info.
When run on even sized vector.
Test values are (PRE-SORTING):
8 4 6 5 2 4 1 2
PivotValue:5
swap:8&2
2 4 6 5 2 4 1 8
swap:6&1
2 4 1 5 2 4 6 8
swap:5&4
2 4 1 4 2 5 6 8
left:0 r:4
l:5 right:7
PivotValue:1
swap:2&1
1 4 2 4 2
left:0 r:0
l:1 right:4
PivotValue:2
swap:4&2
2 2 4 4
swap:2&2
2 2 4 4
left:1 r:1
l:3 right:4
PivotValue:4
swap:4&4
4 4
left:3 r:3
l:4 right:4
PivotValue:6
swap:6&6
5 6 8
left:5 r:5
l:7 right:7
# Recursions:5 0
Data Sorted.
Sorted test values are (POST-SORTING):
1 2 2 4 4 5 6 8
Here is a case with an odd-sized array (9). It works 90% of the time.
Test values are (PRE-SORTING):
7 7 5 6 5 8 9 5 8
PivotValue:5
swap:7&5
5 7 5 6 5 8 9 7 8
swap:7&5
5 5 5 6 7 8 9 7 8
swap:5&5
5 5 5 6 7 8 9 7 8
left:0 r:1
l:3 right:8
PivotValue:5
swap:5&5
5 5
left:0 r:0
l:1 right:1
PivotValue:8
swap:8&8
6 7 8 9 7 8
swap:9&7
6 7 8 7 9 8
left:3 r:6
l:7 right:8
PivotValue:7
swap:7&7
6 7 8 7
left:3 r:4
l:5 right:6
PivotValue:6
swap:6&6
6 7
left:3 r:2
l:4 right:4
PivotValue:8
swap:8&7
7 8
left:5 r:5
l:6 right:6
PivotValue:9
swap:9&8
8 9
left:7 r:7
l:8 right:8
# Recursions:7 0
Data Sorted.
Sorted test values are (POST-SORTING):
5 5 5 6 7 7 8 8 9
Here is the print output for when an odd-sized (9) vector input causes a crash.
Test values are (PRE-SORTING):
8 3 2 3 9 3 8 1 5
PivotValue:9
swap:9&5
8 3 2 3 5 3 8 1 9
left:0 r:7
l:8 right:8
PivotValue:3
swap:8&1
1 3 2 3 5 3 8 8
swap:3&3
1 3 2 3 5 3 8 8
swap:3&3
1 3 2 3 5 3 8 8
left:0 r:2
l:4 right:7
PivotValue:3
swap:3&2
1 2 3
left:0 r:1
l:2 right:2
PivotValue:1
swap:1&1
1 2
swap:2&0
1 0
swap:3&0
1 0
swap:3&1
1 0
swap:5&0
1 0
swap:3&1
1 0
swap:8&0
1 0
swap:8&0
1 0
swap:9&0
1 0
swap:7274596&0
1 0
swap:666050571&0
1 0
swap:369110150&0
1 0
swap:1&0
1 0
swap:1&0
1 0
swap:110&0
1 0
swap:649273354&0
1 0
swap:134229126&0
1 0
swap:3764640&0
1 0
swap:2293216&0
1 0
swap:8&0
1 0
swap:2&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764672&0
1 0
swap:3764608&0
1 0
swap:3&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764704&0
1 0
swap:3764640&0
1 0
swap:2&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764736&0
1 0
swap:3764672&0
1 0
swap:3&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764768&0
1 0
swap:3764704&0
1 0
swap:9&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764800&0
1 0
swap:3764736&0
1 0
swap:3&0
1 0
swap:6619252&0
1 0
swap:649273354&0
1 0
swap:134229127&0
1 0
swap:3764832&0
1 0
swap:3764768&0
1 0
swap:8&0
1 0
swap:666050571&0
1 0
swap:402664583&0
1 0
swap:3765152&0
1 0
swap:3764800&0
1 0
swap:1&0
1 0
swap:900931609&0
1 0
swap:268446854&0
1 0
swap:2046&0
1 0
swap:2046&0
1 0
swap:649273354&0
1 0
swap:134229140&0
1 0
swap:2293216&0
1 0
swap:3764832&0
1 0
swap:5&0
1 0
swap:11399&0
1 0
swap:3735896&0
1 0
swap:3735896&0
1 0
swap:548610060&1
1 0
swap:50342980&0
1 0
swap:6356944&-1
1 0
swap:3735800&-2
1 0
swap:3735648&0
1 0
swap:3735648&-1
1 0
swap:3768320&0
1 0
swap:32768&1
1 0
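The garbage values in the swap lines above (like 7274596 and 649273354) point to out-of-bounds reads: with size_t indices, once r reaches 0 at the left edge of a partition, r-- wraps around to a huge value, so the next vec[r] access reads past the end of the vector and the recursion gets garbage bounds. Below is a minimal sketch of the same mid-pivot partition using signed indices, which avoids that wrap-around (this is my reading of the log, not the original poster's fix; the name quickSortMidPivotSigned is made up for the sketch):
#include <utility>
#include <vector>
using std::vector;

// Same mid-pivot partition scheme as above, but with signed indices so that r
// can step one position below `left` without wrapping around to a huge value.
template <typename T>
void quickSortMidPivotSigned(vector<T>& vec, long long left, long long right) {
    if (left >= right) return;
    long long l = left, r = right;
    T pivot = vec[left + (right - left) / 2];
    while (l <= r) {
        while (vec[l] < pivot) l++;
        while (vec[r] > pivot) r--;
        if (l <= r) {
            std::swap(vec[l], vec[r]);
            l++;
            r--;                         // may become left - 1; harmless when signed
        }
    }
    if (left < r) quickSortMidPivotSigned(vec, left, r);
    if (l < right) quickSortMidPivotSigned(vec, l, right);
}

// usage: quickSortMidPivotSigned(dsVector, 0, (long long)dsVector.size() - 1);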

Pandas groupping values to column

I have a dataframe that looks like this:
a b c d e
0 0 1 2 1 0
1 3 0 0 4 3
2 3 4 0 4 2
3 4 1 0 4 3
4 2 1 3 4 3
5 3 2 0 3 3
6 2 1 1 1 0
7 1 1 0 3 3
8 3 3 3 3 4
9 2 3 4 2 2
I run the following command:
df.groupby('A').sum()
And I get:
b c d e
a
0 1 2 1 0
1 1 0 3 3
2 5 8 7 5
3 9 3 14 12
4 1 0 4 3
And after that I want to access
labels = df['A']
But I get an error that there is no such column.
So does pandas have some syntax to get something like this?
a b c d e
0 0 1 2 1 0
1 1 1 0 3 3
2 2 5 8 7 5
3 3 9 3 14 12
4 4 1 0 4 3
I need to sum all values of columns b, c, d, e to column a with the relevant index
You can just access the index with df.index, and add it back into your dataframe as another column.
grouped_df = df.groupby('A').sum()
grouped_df['A'] = grouped_df.index
grouped_df.sum(axis=1)
Alternatively, groupby has an as_index option to keep the column 'A':
groupby('A', as_index=False)
or, after groupby, you can use reset_index to put the column 'A' back.

Distribution of M objects in N container

Given an array of size N whose elements denote the capacities of containers, in how many ways can M identical objects be distributed so that every container is filled at the end?
For example, for arr = {2,1,2,1}, N = 4 and M = 10, there turn out to be 35 ways.
Please help me out with this question.
First calculate the sum of the container sizes. In your case 2+1+2+1 = 6; let this be P. Find the number of ways of choosing P objects from M. There are M choices for the first object, M-1 for the second, M-2 for the third, etc. This gives us M * (M-1) * ... * (M-P+1), or M! / (M-P)!. This will give us more states than you want, for example
1 2 | 3 | 4 5 | 6
2 1 | 3 | 4 5 | 6
There are q! ways of arranging q objects in q slots, so we need to divide by factorial(arr[0]), factorial(arr[1]), etc. In this case divide by 2! * 1! * 2! * 1! = 4.
I'm getting a very much larger number than 35: 10! / 4! = 151200, and dividing that by 4 gives 37800, so I'm not sure if I have understood your question correctly.
Ah, so looking at the problem again: you need to find N integers n1, n2, ..., nN so that n1 + n2 + ... + nN = M and n1 >= arr[1], n2 >= arr[2], and so on.
That looks quite simple. Let P be as above. Take the first P objects and give each container its minimum number, arr[1], arr[2], etc. You will have M - P objects left; let this be R.
Essentially the problem simplifies to finding N numbers >= 0 which sum to R. This is a classic problem. As it's a challenge I won't do the whole answer for you, but if we break the N = 4, R = 4 case down you may see the pattern:
4 0 0 0 - 1 case starting with 4
3 1 0 0 - 3 cases starting with 3
3 0 1 0
3 0 0 1
2 2 0 0 - 6 cases
2 1 1 0
2 1 0 1
2 0 2 0
2 0 1 1
2 0 0 2
1 3 0 0 - 10 cases
1 2 1 0
1 2 0 1
1 1 2 0
1 1 1 1
1 1 0 2
1 0 3 0
1 0 2 1
1 0 1 2
1 0 0 3
0 4 0 0 - 15 cases
0 3 1 0
0 3 0 1
0 2 2 0
0 2 1 1
0 2 0 2
0 1 3 0
0 1 2 1
0 1 1 2
0 1 0 3
0 0 4 0
0 0 3 1
0 0 2 2
0 0 1 3
0 0 0 4
You should recognise the numbers 1, 3, 6, 10, 15.
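Those are the triangular numbers, and they sum to 35. As a quick sanity check (my addition, not part of the original answer), the same count falls out of the standard stars-and-bars formula C(R + N - 1, N - 1):
#include <cstdint>
#include <cstdio>

// C(n, k) computed iteratively; each intermediate value is itself a binomial
// coefficient, so the division at every step is exact.
uint64_t binomial(uint64_t n, uint64_t k) {
    if (k > n) return 0;
    uint64_t result = 1;
    for (uint64_t i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;
    return result;
}

int main() {
    const int arr[] = {2, 1, 2, 1};
    const int N = 4, M = 10;
    int P = 0;
    for (int a : arr) P += a;            // P = 6, the objects needed for the minimums
    const int R = M - P;                 // R = 4 objects left to distribute freely
    // Ways to split R identical objects over N containers: C(R + N - 1, N - 1)
    std::printf("%llu\n", (unsigned long long)binomial(R + N - 1, N - 1));   // prints 35
    return 0;
}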

J (Tacit) Sieve Of Eratosthenes

I'm looking for J code to do the following.
Suppose I have a list of random integers (sorted),
2 3 4 5 7 21 45 49 61
I want to start with the first element and remove any multiples of that element from the list, then move on to the next element and cancel out its multiples, and so on and so forth.
Thus the output I'm looking for is 2 3 5 7 61. Basically a Sieve of Eratosthenes. I would appreciate it if someone could explain the code as well, since I'm learning J and find it difficult to understand most code :(
Regards,
babsdoc
It's not exactly what you ask for, but here is a more idiomatic (and much faster) version of the Sieve.
Basically, what you need is to check which number is a multiple of which. You can get this from the table of modulos: |/~
l =: 2 3 4 5 7 21 45 49 61
|/~ l
0 1 0 1 1 1 1 1 1
2 0 1 2 1 0 0 1 1
2 3 0 1 3 1 1 1 1
2 3 4 0 2 1 0 4 1
2 3 4 5 0 0 3 0 5
2 3 4 5 7 0 3 7 19
2 3 4 5 7 21 0 4 16
2 3 4 5 7 21 45 0 12
2 3 4 5 7 21 45 49 0
Every pair of multiples gives a 0 in the table. Now, we are not interested in the 0s that correspond to self-modulos (2 mod 2, 3 mod 3, etc.; the 0s on the diagonal) so we have to remove them. One way to do this is to add 1s in their place, like so:
=/~ l
1 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0
0 0 0 0 1 0 0 0 0
0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 1
(=/~l) + (|/~l)
1 1 0 1 1 1 1 1 1
2 1 1 2 1 0 0 1 1
2 3 1 1 3 1 1 1 1
2 3 4 1 2 1 0 4 1
2 3 4 5 1 0 3 0 5
2 3 4 5 7 1 3 7 19
2 3 4 5 7 21 1 4 16
2 3 4 5 7 21 45 1 12
2 3 4 5 7 21 45 49 1
This can be also written as (=/~ + |/~) l.
From this table we get the final list of numbers: every number whose column contains a 0 is excluded.
We build this list of exclusions simply by multiplying along each column. If a column contains a 0, its product is 0; otherwise it's a positive number:
*/ (=/~ + |/~) l
256 2187 0 6250 14406 0 0 0 18240
Before doing the last step, we'll have to improve this a little. There is no reason to perform long multiplications since we are only interested in 0s and not-0s. So, when building the table, we'll keep only 0s and 1s by taking the "sign" of each number (this is the signum:*):
* (=/~ + |/~) l
1 1 0 1 1 1 1 1 1
1 1 1 1 1 0 0 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 1 1
1 1 1 1 1 0 1 0 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
so,
*/ * (=/~ + |/~) l
1 1 0 1 1 0 0 0 1
From the list of exclusion, you just copy:# the numbers to your final list:
l #~ */ * (=/~ + |/~) l
2 3 5 7 61
or,
(]#~[:*/[:*=/~+|/~) l
2 3 5 7 61
Tacit iteration is usually done with the conjunction Power. When the test for completion needs to be something other than hitting a fixpoint, the Do While construction works well.
In this solution filterMultiplesOfHead is applied repeatedly until there are no more numbers not either applied or filtered. Numbers already applied are accumulated in a partial answer. When the list to be processed is empty the partial answer is the result, after stripping off the boxing used to segregate processed from unprocessed data.
filterMultiplesOfHead=: {. (((~: >.)# %~) # ]) }.
appendHead=: (>#[ , {.#>#])/
pass=: appendHead ; filterMultiplesOfHead#>#{:
prep=: a: , <
unfinished=: [: -. a: -: {:
sieve=: [: ; [: pass^:unfinished^:_ prep
sieve 2 3 4 5 7 21 45 49 61
2 3 5 7 61
prep 2 3 4 7 9 10
┌┬────────────┐
││2 3 4 7 9 10│
└┴────────────┘
appendHead prep 2 3 4 7 9 10
2
filterMultiplesOfHead 2 3 4 7 9 10
3 7 9
pass^:2 prep 2 3 4 7 9 10
┌───┬─┐
│2 3│7│
└───┴─┘
sieve 1-.~/:~~.>:?.$~100
2 3 7 11 29 31 41 53 67 73 83 95 97