There is a rectangular grid of coins, with heads being represented by the value 1 and tails being represented by the value 0. You represent this using a 2D integer array table (between 1 to 10 rows/columns, inclusive).
In each move, you choose any single cell (R, C) in the grid (R-th row, C-th column) and flip the coins in all cells (r, c), where r is between 0 and R, inclusive, and c is between 0 and C, inclusive. Flipping a coin means inverting the value of a cell from zero to one or one to zero.
Return the minimum number of moves required to change all the cells in the grid to tails. This will always be possible.
Examples:
1111
1111
returns: 1
01
01
returns: 2
010101011010000101010101
returns: 20
000
000
001
011
returns: 6
This is what i tried:
Since the order of moves doesn't matter, and making the same move twice is the same as not making it at all, we can enumerate all distinct combinations of moves and minimize the size of the good combinations (good meaning those that leave the grid all tails).
This can be done by making a set of all cells, each represented by an index (i.e. if there were 20 cells in all, the set would contain 20 elements, indexed 1 to 20). Then generate all possible subsets and check which of them give the answer (i.e. whether making a move on every cell in the subset leaves the grid all tails). Finally, take the smallest good subset.
I don't know if I've expressed myself clearly enough... I'll post code if you want.
Anyway, this method is too time-consuming and wasteful, and it isn't feasible (in my code) for more than 20 coins.
How to go about this?
I think a greedy algorithm suffices, with one step per coin.
Every move flips a rectangular subset of the board. Some coins are included in more subsets than others: the coin at (0,0) upper-left is in every subset, and the coin at lower-right is in only one subset, namely the one which includes every coin.
So, choosing the first move is obvious: flip every coin if the lower-right corner must be flipped. Eliminate that possible move.
Now, the lower-right coin's immediate neighbors, left and above, can only potentially be flipped by a single remaining move each. So, if that move must be performed, do it. The order of evaluation of the neighbors doesn't matter, since they aren't really alternatives to each other; a simple raster pattern suffices.
Repeat until finished.
Here is a C++ program:
#include <iostream>
#include <valarray>
#include <cstdlib>
#include <ctime>
using namespace std;

void print_board( valarray<bool> const &board, size_t cols ) {
    for ( size_t i = 0; i < board.size(); ++ i ) {
        cout << board[i] << " ";
        if ( i % cols == cols-1 ) cout << endl;
    }
    cout << endl;
}

int main() {
    srand( time(NULL) );

    int const rows = 5, cols = 5;
    valarray<bool> board( false, rows * cols );
    for ( size_t i = 0; i < board.size(); ++ i ) board[i] = rand() % 2;
    print_board( board, cols );

    int taken_moves = 0;
    // Walk the board from the lower-right corner back toward the upper-left.
    for ( size_t i = board.size(); i > 0; ) {
        if ( ! board[ -- i ] ) continue; // already tails, no move needed

        // The only remaining move that can flip coin i is the rectangle
        // spanning rows 0..i/cols and columns 0..i%cols; perform it.
        size_t sizes[] = { i%cols +1, i/cols +1 }, strides[] = { 1, cols };
        gslice cur_move( 0, valarray<size_t>( sizes, 2 ),
                            valarray<size_t>( strides, 2 ) );
        board[ cur_move ] ^= valarray<bool>( true, sizes[0] * sizes[1] );

        cout << sizes[1] << ", " << sizes[0] << endl;
        print_board( board, cols );
        ++ taken_moves;
    }
    cout << taken_moves << endl;
}
Not C++. I agree with #Potatoswatter that the optimal solution is greedy, but I wondered whether a linear Diophantine system also works. This Mathematica function does it:
f[ei_] := (
xdim = Dimensions[ei][[1]];
ydim = Dimensions[ei][[2]];
(* Construct XOR matrixes. These are the base elements representing the
possible moves *)
For[i = 1, i < xdim + 1, i++,
For[j = 1, j < ydim + 1, j++,
b[i, j] = Table[If[k <= i && l <= j, -1, 0], {k, 1, xdim}, {l, 1, ydim}]
]
];
(*Construct Expected result matrix*)
Table[rv[i, j] = -1, {i, 1, xdim}, {j, 1, ydim}];
(*Construct Initial State matrix*)
Table[eiv[i, j] = ei[[i, j]], {i, 1, xdim}, {j, 1, ydim}];
(*Now Solve*)
repl = FindInstance[
Flatten[Table[(Sum[a[i, j] b[i, j], {i, 1, xdim}, {j, 1, ydim}][[i]][[j]])
eiv[i, j] == rv[i, j], {i, 1, xdim}, {j, 1, ydim}]],
Flatten[Table[a[i, j], {i, 1, xdim}, {j, 1, ydim}]]][[1]];
Table[c[i, j] = a[i, j] /. repl, {i, 1, xdim}, {j, 1, ydim}];
Print["Result ",xdim ydim-Count[Table[c[i, j], {i, 1, xdim}, {j, 1,ydim}], 0, ydim xdim]];)
When called with your examples (with -1 instead of 0):
ei = ({
{1, 1, 1, 1},
{1, 1, 1, 1}
});
f[ei];
ei = ({
{-1, 1},
{-1, 1}
});
f[ei];
ei = {{-1, 1, -1, 1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, -1, 1, -1,
1, -1, 1, -1, 1, -1, 1}};
f[ei];
ei = ({
{-1, -1, -1},
{-1, -1, -1},
{-1, -1, 1},
{-1, 1, 1}
});
f[ei];
The result is:
Result: 1
Result: 2
Result: 20
Result: 6
Or :)
Solves a 20x20 random problem in 90 seconds on my poor man's laptop.
You take the N+M-1 coins in the right and bottom borders and solve them, then call the algorithm recursively on everything else. This is basically what Potatoswatter is saying to do. Below is a very simple recursive algorithm for it.
Solver(Grid[N][M])
    if Grid[N-1][M-1] == Heads
        Flip(Grid,N-1,M-1)
    for each element i from N-2 to 0 inclusive   // This is empty if N is 1
        If Grid[i][M-1] == Heads
            Flip(Grid,i,M-1)
    for each element i from M-2 to 0 inclusive   // This is empty if M is 1
        If Grid[N-1][i] == Heads
            Flip(Grid,N-1,i)
    if N>1 and M > 1:
        Solver(Grid.ShallowCopy(N-1, M-1))
    return;
Note: It probably makes sense to implement Grid.ShallowCopy by just having Solver have arguments for the width and the height of the Grid. I only called it Grid.ShallowCopy to indicate that you should not be passing in a copy of the grid, though C++ won't do that with arrays by default anyhow.
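For reference, here's a rough C++ translation of that pseudocode (my own transcription, so treat the names flip and solver as illustrative; the grid is a vector of 0/1 rows, and width/height arguments replace Grid.ShallowCopy, as the note suggests):

#include <iostream>
#include <vector>
using Grid = std::vector<std::vector<int>>;

// Flip the rectangle with corners (0,0) and (r,c), inclusive.
void flip(Grid& g, int r, int c) {
    for (int i = 0; i <= r; ++i)
        for (int j = 0; j <= c; ++j)
            g[i][j] ^= 1;
}

// Clears the bottom-right cell, then the rest of the right and bottom borders,
// then recurses on the remaining (n-1) x (m-1) subgrid. Returns the move count.
int solver(Grid& g, int n, int m) {
    if (n == 0 || m == 0) return 0;
    int moves = 0;
    if (g[n-1][m-1]) { flip(g, n-1, m-1); ++moves; }
    for (int i = n-2; i >= 0; --i)                 // right border, bottom to top
        if (g[i][m-1]) { flip(g, i, m-1); ++moves; }
    for (int j = m-2; j >= 0; --j)                 // bottom border, right to left
        if (g[n-1][j]) { flip(g, n-1, j); ++moves; }
    return moves + solver(g, n-1, m-1);
}

int main() {
    Grid grid = {{1, 1, 1, 1}, {1, 1, 1, 1}};      // first example from the question
    std::cout << solver(grid, (int)grid.size(), (int)grid[0].size()) << "\n";   // prints 1
}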
An easy criterion for rectangle(x,y) to be flipped seems to be: exactly when the number of ones in the 2x2 square with top-left square (x,y) is odd.
(code in Python)
def flipgame(grid):
    w, h = len(grid[0]), len(grid)
    sol = [[0]*w for y in range(h)]
    for y in range(h-1):
        for x in range(w-1):
            sol[y][x] = grid[y][x] ^ grid[y][x+1] ^ grid[y+1][x] ^ grid[y+1][x+1]
    for y in range(h-1):
        sol[y][w-1] = grid[y][w-1] ^ grid[y+1][w-1]
    for x in range(w-1):
        sol[h-1][x] = grid[h-1][x] ^ grid[h-1][x+1]
    sol[h-1][w-1] = grid[h-1][w-1]
    return sol
The 2D array returned has a 1 in position (x,y) if rectangle(x,y) should be flipped, so the number of ones in it is the answer to your original question.
EDIT: To see why it works:
If we do moves (x,y), (x,y-1), (x-1,y), (x-1,y-1), only square (x,y) is inverted. This leads to the code above. The solution must be optimal, as there are 2^(hw) possible configurations of the board and 2^(hw) possible ways to transform the board (assuming every move can be done 0 or 1 times). In other words, there is only one solution, hence the above produces the optimal one.
You could use recursive trials.
You would need at least the move count and to pass a copy of the grid. You'd also want to set a maximum move cutoff to limit the breadth of the branches coming out of each node of the search tree. Note this is a "brute force" approach.
Your general algorithm structure would be:
const int MAX_FLIPS = 10;
const unsigned int TREE_BREADTH = 10;

int run_recursion(std::vector<std::vector<bool>> my_grid, int flips)
{
    bool found = true;
    int temp_val = -1;
    int result = -1;
    // Search for a solution with for loops; if a true (heads) is found in the grid, found = false;
    ...
    if ( ! found && flips < MAX_FLIPS )
    {
        // flip coins
        for ( unsigned int more_flips = 0; more_flips < TREE_BREADTH; more_flips++ )
        {
            // flip one coin
            ...
            // run recursion
            temp_val = run_recursion(my_grid, flips + 1);
            if ( (result == -1 && temp_val != -1) ||
                 (temp_val != -1 && temp_val < result) )
                result = temp_val;
        }
    }
    return result;
}
...sorry in advance for any typos/minor syntax errors. Wanted to prototype a fast solution for you, not write the full code...
Or easier still, you could just use a brute force of linear trials. The outer for loop would be the number of trials, the inner for loop the flips within a trial. On each iteration you'd flip and check whether you'd succeeded, recycling your success check and flip code from above. Success would short-circuit the inner loop. At the end of the inner loop, store the result in the array; if it fails after max_moves, store -1. Then search for the smallest non-failure value.
A more elegant solution would be to use a multithreading library to start a bunch of threads flipping, and have one thread signal the others when it finds a match; if that match is lower than the number of steps run so far in another thread, that thread exits with failure.
I suggest MPI, but CUDA might win you brownie points as it's hot right now.
Hope that helps, good luck!
Related
Problem Statement:
Given an array arr of n integers, count the number of non-empty subsequences of the given array such that the product of their maximum element and minimum element is zero. Since this number can be huge, compute it modulo 10^9 + 7.
A subsequence of an array is defined as the sequence obtained by deleting several elements from the array (possibly none) without changing the order of the remaining elements.
Example
Given n = 3, arr = [1, 0, -2].
There are 7 subsequences of arr:
[1]: minimum = 1, maximum = 1, min * max = 1
[1, 0]: minimum = 0, maximum = 1, min * max = 0
[1, 0, -2]: minimum = -2, maximum = 1, min * max = -2
[0]: minimum = 0, maximum = 0, min * max = 0
[0, -2]: minimum = -2, maximum = 0, min * max = 0
[1, -2]: minimum = -2, maximum = 1, min * max = -2
[-2]: minimum = -2, maximum = -2, min * max = 4
There are 3 subsequences whose minimum * maximum = 0:
[1, 0], [0], [0, -2]. Hence the answer is 3.
I tried to come up with a solution by counting the number of zeroes, positive numbers and negative numbers, and then adding the possible subsequences (2^count for each group) to an answer variable.
My answer is way off though: it's 10 when the expected answer is 3. Can someone please point out my mistake?
#include<bits/stdc++.h>
using namespace std;
#define int long long

int zeroSubs(vector<int> arr){
    int x = 0, y = 0, z = 0, ans = 0;
    for(int i = 0; i < arr.size(); i++){
        if(arr[i] == 0) z++;
        else if(arr[i] < 0) x++;
        else y++;
    }
    ans += ((int)pow(2, z))*((int)pow(2, x));
    ans += ((int)pow(2, y))*((int)pow(2, z));
    ans += ((int)pow(2, z));
    return ans;
}

int32_t main()
{
    //directly passed the sample test case as an array
    cout<<zeroSubs({1, 0, -2});
    return 0;
}
ans += ((1<<z)-1)*((1<<x)-1);
ans += ((1<<y)-1)*((1<<z)-1);
ans += ((1<<z)-1);
Subtracting 1 removes the empty choice from each group, so every counted subsequence contains at least one zero; the three terms then count the disjoint cases "zeros plus at least one negative", "zeros plus at least one positive", and "zeros only", which are exactly the subsequences with min * max = 0.
Made this slight change in the logic, thanks a lot to everyone for the valuable feedback. It works now.
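For completeness, here is a sketch of the corrected counting that also applies the modulo the problem statement asks for, using modular exponentiation instead of shifts so large n doesn't overflow (function names here are just illustrative):

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007;

long long pow2mod(long long e) {            // 2^e mod MOD
    long long base = 2, res = 1;
    while (e > 0) {
        if (e & 1) res = res * base % MOD;
        base = base * base % MOD;
        e >>= 1;
    }
    return res;
}

long long zeroSubsMod(const vector<long long>& arr) {
    long long neg = 0, pos = 0, zer = 0;
    for (long long v : arr) {
        if (v == 0) zer++;
        else if (v < 0) neg++;
        else pos++;
    }
    long long zs = (pow2mod(zer) - 1 + MOD) % MOD;                 // choose at least one zero
    long long ans = zs * ((pow2mod(neg) - 1 + MOD) % MOD) % MOD;   // zeros + at least one negative
    ans = (ans + zs * ((pow2mod(pos) - 1 + MOD) % MOD)) % MOD;     // zeros + at least one positive
    ans = (ans + zs) % MOD;                                        // zeros only
    return ans;
}

int main() {
    cout << zeroSubsMod({1, 0, -2}) << "\n";   // prints 3
}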
I am studying Dynamic Programming on GeeksForGeeks and have trouble understanding the Tiles Stacking Problem and the way it is solved.
A stable tower of height n is a tower consisting of exactly n tiles of unit height stacked vertically in such a way that no bigger tile is placed on a smaller tile.
We have an infinite number of tiles of sizes 1, 2, …, m. The task is to calculate the number of different stable towers of height n that can be built from these tiles, with the restriction that you can use at most k tiles of each size in the tower.
Note: Two towers of height n are different if and only if there exists a height h (1 <= h <= n) such that the towers have tiles of different sizes at height h.
For example:
Input : n = 3, m = 3, k = 1.
Output : 1
Possible sequences: { 1, 2, 3}.
Hence answer is 1.
Input : n = 3, m = 3, k = 2.
Output : 7
{1, 1, 2}, {1, 1, 3}, {1, 2, 2},
{1, 2, 3}, {1, 3, 3}, {2, 2, 3},
{2, 3, 3}.
The way to solve it is to count the number of decreasing sequences of length n using numbers from 1 to m, where every number can be used at most k times. We can recursively compute the count for n using the count for n-1.
Declare a 2D array dp[][], where each state dp[i][j] denotes the number of decreasing sequences of length i using numbers from j to m. We need to take care of the fact that a number can be used at most k times. This can be done by considering 0 to k occurrences of the number j, which gives the recurrence dp[i][j] = dp[i][j+1] + dp[i-1][j+1] + ... + dp[i-k][j+1] (dropping terms with a negative length).
Also, for a fixed j each state only needs the previous k+1 values over i, so we can maintain a prefix-sum array over i for each state. That gets rid of the factor of k per state.
I have read this algorithm many times but I don't understand it or how to prove that it is correct. I have tried to find an explanation on the internet but found only variations of it. Please help me understand it.
Observe that the largest size tile (m) can appear only at the bottom.
Its appearances are consecutive
Your recurrence becomes:
T(n,m,k) = SIGMA_{i=0,...,k} T(n-i,m-1,k)
Then you have to define the base cases of the recurrence:
T(n,m,1) = // can you tell what this is?
T(n,1,k) = // can you tell what this is?
T(1,m,k) = m // this is easy
We can prove it by forming a logical recurrence:
(A) If the maximum stack height, given m and k, is smaller than n, we cannot create any stack.
(B) If only one tile is allowed, we can choose m different sizes for that tile.
(C) If only one size is allowed, if k is greater than or equal to n, we can construct one stack of n tiles of size 1; otherwise, zero stacks.
(D) For each possible count, x, of tiles of size m stacked, we have one way that is multiplied by the number of ways to stack (n - x) tiles, using sizes of at most (m - 1) since we used m.
To convert the recurrence to bottom-up dynamic programming, we initialise the matrix using the base cases of the recurrence, and fill in subsequent entries using its general-case logical branch.
Here's a demonstration of the recurrence in JavaScript (sorry I'm not versed in C++ but the first function, f, which calculates just the count, should be very easy to convert):
// Returns the count
function f(n, m, k){
  if (n > m * k)
    return 0;
  if (n == 1)
    return m;
  if (m == 1)
    return n <= k ? 1 : 0;

  let result = 0;
  for (let x=0; x<=k; x++)
    result += f(n - x, m - 1, k);

  return result;
}

// Returns the sequences
function g(n, m, k){
  if (n > m * k)
    return [];
  if (n == 1)
    return new Array(m).fill(0).map((_, i) => [i + 1]);
  if (m == 1)
    return n <= k ? [new Array(n).fill(1)] : [];

  let result = [];
  for (let x=0; x<=k; x++){
    const pfx = new Array(x).fill(m);
    const prev = g(n - x, m - 1, k);
    for (let s of prev)
      result.push(pfx.concat(s));
  }

  return result;
}

var inputs = [
  [3, 3, 1],
  [3, 3, 2],
  [1, 2, 2]
];

for (let args of inputs){
  console.log('' + args);
  console.log(f(...args));
  console.log(JSON.stringify(g(...args)));
  console.log('');
}
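Since the function f was said to be easy to convert to C++, here is one possible bottom-up conversion that also uses the prefix-sum idea from the question. This is my own reading of the recurrence, so treat it as a sketch and double-check it against the definition of dp[i][j]:

#include <iostream>
#include <vector>
using namespace std;

// dp[i][j] = number of towers of height i using sizes j..m, each size at most k times.
// Only the previous column (j+1) is kept in memory.
long long countTowers(int n, int m, int k) {
    vector<long long> prev(n + 1, 0), cur(n + 1, 0);
    prev[0] = 1;                                   // dp[0][m+1] = 1 (empty tower), else 0
    for (int j = m; j >= 1; --j) {
        // prefix[i] = prev[0] + ... + prev[i], so a window sum of k+1 terms is O(1)
        vector<long long> prefix(n + 1, 0);
        prefix[0] = prev[0];
        for (int i = 1; i <= n; ++i) prefix[i] = prefix[i - 1] + prev[i];
        for (int i = 0; i <= n; ++i) {
            long long below = (i - k - 1 >= 0) ? prefix[i - k - 1] : 0;
            cur[i] = prefix[i] - below;            // sum_{t=0..min(i,k)} dp[i-t][j+1]
        }
        prev = cur;
    }
    return prev[n];                                // dp[n][1]
}

int main() {
    cout << countTowers(3, 3, 1) << "\n";          // expected 1
    cout << countTowers(3, 3, 2) << "\n";          // expected 7
}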
There is a common algorithm for solving the knapsack problem using dynamic programming, but it doesn't work for W = 750000000 because it fails with bad_alloc. Any ideas how to solve this problem for my value of W?
int n = this->items.size();
std::vector<std::vector<uint64_t>> dps(this->W + 1, std::vector<uint64_t>(n + 1, 0));
for (int j = 1; j <= n; j++)
    for (int k = 1; k <= this->W; k++) {
        if (this->items[j - 1]->wts <= k)
            dps[k][j] = std::max(dps[k][j - 1], dps[k - this->items[j - 1]->wts][j - 1] + this->items[j - 1]->cost);
        else
            dps[k][j] = dps[k][j - 1];
    }
First of all, you can use only one dimension to solve the knapsack problem. This will reduce your memory from dp[W][n] (n*W space) to dp[W] (W space). You can look here: 0/1 Knapsack Dynamic Programming Optimazion, from 2D matrix to 1D matrix
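As a rough illustration of that 1D reduction (a sketch only; dp still has W + 1 entries, so by itself it doesn't fix the bad_alloc for W = 750000000, but it removes the factor of n):

#include <cstdint>
#include <iostream>
#include <vector>
#include <algorithm>

uint64_t knapsack1D(const std::vector<int>& wts, const std::vector<uint64_t>& cost, int W) {
    std::vector<uint64_t> dp(W + 1, 0);          // dp[k] = best value achievable with capacity k
    for (std::size_t j = 0; j < wts.size(); ++j)
        for (int k = W; k >= wts[j]; --k)        // iterate downwards so each item is used at most once
            dp[k] = std::max(dp[k], dp[k - wts[j]] + cost[j]);
    return dp[W];
}

int main() {
    std::cout << knapsack1D({1, 2, 4}, {10, 15, 40}, 5) << "\n";   // prints 50
}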
But even if you use only dp[W], your W is really high and might be too much memory. If your items are big, you can use an approach that reduces the number of possible weights. First, realize that you don't need all positions 0..W, only those that can actually be reached as a sum of the weight[i].
For example:
W = 500
weights = [100, 200, 400]
You will never use position dp[473] of your matrix, because the items can only occupy the positions p = [0, 100, 200, 300, 400, 500]. It is easy to see that this problem is the same as when:
W = 5
weights = [1,2,4]
Another more complicated example:
W = 20
weights = [5, 7, 8]
Using the same approach as before, you don't need all weights from 0 to 20, because the items can only fill up the positions
p = [0, 5, 7, 5 + 7, 5 + 8, 7 + 8, 5 + 7 + 8]
p = [0, 5, 7, 12, 13, 15, 20]
and you can reduce your matrix from dp[20] to dp[size of p] = dp[7].
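One way to exploit that observation is to key the DP by the reachable weights instead of allocating an array of size W + 1, e.g. with a hash map. This is a sketch under the assumption that the number of distinct reachable sums stays manageable; in the worst case it can still grow exponentially with the number of items:

#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>
#include <algorithm>

uint64_t knapsackSparse(const std::vector<int64_t>& wts,
                        const std::vector<uint64_t>& cost, int64_t W) {
    std::unordered_map<int64_t, uint64_t> dp;      // reachable weight -> best value at that weight
    dp[0] = 0;
    for (std::size_t j = 0; j < wts.size(); ++j) {
        auto next = dp;                            // copy, so each item is used at most once
        for (const auto& [w, v] : dp) {
            int64_t nw = w + wts[j];
            if (nw <= W) {
                uint64_t& best = next[nw];
                best = std::max(best, v + cost[j]);
            }
        }
        dp = std::move(next);
    }
    uint64_t best = 0;
    for (const auto& [w, v] : dp) best = std::max(best, v);
    return best;
}

int main() {
    std::cout << knapsackSparse({5, 7, 8}, {10, 15, 40}, 20) << "\n";   // prints 65
}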
You do not show n, but even if we assume it is 1, let's see how much data you are trying to allocate. It would be:
W*64*2 // Here we don't consider the overhead of the vector
This comes out to be:
750000000*64*2 bits = ~11.18 GiB
I am guessing this is more space than your program will be allowed. You are going to need to take a new approach. Perhaps try to handle the problem as multiple blocks: consider the first and second half separately, then swap.
I'm required to implement Dijkstra's algorithm via an ADT graph, using the adjacency-matrix representation, to find a shortest path by enhancing the pseudocode below, in either C or C++.
procedure Dijkstra(G, w, r, Parent[0:n-1], Dist)
    for v ← 0 to n-1 do
        Dist[v] ← ∞
        InTheTree[v] ← .false.
    endfor
    Parent[r] ← -1
    Dist[r] ← 0
    for Stage ← 1 to n-1 do
        Select vertex u that minimises Dist[u] over all u such that InTheTree[u] = .false.
        InTheTree[u] = .true.                    // add u to T
        for each vertex v such that uv ∈ E do    // update Dist[v] and
            if .not. InTheTree[v] then           // Parent[v] arrays
                if Dist[u] + w(uv) < Dist[v]
                    Dist[v] = Dist[u] + w(uv)
                    Nearest[v] ← w(uv)
                    Parent[v] ← u
                endif
            endif
        endfor
    endfor
end Dijkstra
Here is my solution, written in C++. My lecturer claims that the code does not meet the pseudocode requirements, and I'm not sure where it went wrong, so can anyone help me spot what doesn't match between the code and the pseudocode?
#include <stdio.h>
#include <limits.h>
#define N 9

int minDistance(int dist[], bool sptSet[])
{
    int min = INT_MAX, min_index;
    for (int n = 0; n < N; n++)
        if (sptSet[v] == false && dist[n] <= min)
            min = dist[n], min_index = n;
    return min_index;
}

int printSolution(int dist[], int v)
{
    printf("Vertex Distance from Source\n");
    for (int i = 0; i < N; i++)
        printf("%d \t\t %d\n", i, dist[i]);
}

void dijkstra(int graph[N][N], int src)
{
    int dist[N];
    bool sptSet[N];
    for (int i = 0; i < N; i++) {
        dist[i] = INT_MAX;
        sptSet[i] = false;
    }
    dist[src] = 0;
    for (int count = 0; count < N-1; count++)
    {
        int u = minDistance(dist, sptSet);
        sptSet[u] = true;
        for (int n = 0; n < N; n++)
            if (!sptSet[n] && graph[u][n] && dist[u] != INT_MAX
                && dist[u]+graph[u][n] < dist[n])
                dist[n] = dist[u] + graph[u][n];
    }
    printSolution(dist, N);
}

int main()
{
    int graph[N][N] = {{0, 4, 0, 0, 0, 0, 0, 8, 0},
                       {4, 0, 8, 0, 0, 0, 0, 11, 0},
                       {0, 8, 0, 7, 0, 4, 0, 0, 2},
                       {0, 0, 7, 0, 9, 14, 0, 0, 0},
                       {0, 0, 0, 9, 0, 10, 0, 0, 0},
                       {0, 0, 4, 0, 10, 0, 2, 0, 0},
                       {0, 0, 0, 14, 0, 2, 0, 1, 6},
                       {8, 11, 0, 0, 0, 0, 1, 0, 7},
                       {0, 0, 2, 0, 0, 0, 6, 7, 0}};
    dijkstra(graph, 0);
    return 0;
}
The most obvious mismatch is that your code does not have anything corresponding to the pseudocode's Parent array. I take that as an output parameter, though it's not explicitly marked as one. As you seem to have recognized, it is not needed for computing only the lengths of the minimum paths, but it contains all the information about the actual steps in those paths, and that is often desired information.
You also have no analog of the pseudocode's Nearest; it would be a bit mean to complain about that, though, as Nearest is not a parameter to the routine, and the pseudocode does not show its elements ever being read. As such, it does not appear to serve any useful purpose.
It appears that this code also does not quite match:
if (!sptSet[n] && graph[u][n] && dist[u] != INT_MAX
&& dist[u]+graph[u][n] < dist[n])
dist[n] = dist[u] + graph[u][n];
The condition && dist[u] != INT_MAX does not correspond to anything in the pseudocode. (It's also unnecessary, inasmuch as u was returned by minDistance(), and therefore that condition should always be satisfied).
Conceivably, your instructor may also be dissatisfied that you print the min path lengths instead of returning them. It depends a bit on the pseudocode dialect, but I'd be inclined to take the appearance of Dist in the parameter list as an indication that it is an output parameter, not merely an internal variable.
If your instructor is being extremely picky, then perhaps you can get some slack by pointing out some apparent errors in the pseudocode:
As already mentioned, Nearest is not a parameter, and it is written to but never read from.
It looks like the conditional if Dist[u] ← w(uv) < Dist[v] then should instead be if Dist[u] + w(uv) < Dist[v] then. (You have implemented the correct version, which could be construed as another difference from the pseudocode.)
It looks like Parent[r] ← u should be Parent[v] ← u.
Of course, it could be that your instructor wanted you to implement the pseudocode exactly, errors and all ....
As a matter of strategy, I would have tried to use variable names better matching the pseudocode. I don't think it would be fair for your instructor to reject the code on those grounds, but comparing the C++ code to the pseudocode would have been easier for everyone if you had stuck a bit closer with your names.
While I'm talking about your code, by the way, I observe that although your minDistance() function appears to implement the pseudocode's requirements, it does so in an inefficient way (and Dijkstra isn't particularly efficient to begin with). The usual approach uses a min-heap to track nodes that have been seen but not yet visited, which reduces the cost of selecting the min-distance node from O(n) to O(log n). Not that it matters for so few elements as you are testing on, of course, but for large graphs the difference is enormous.
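For illustration, here is a minimal sketch of that min-heap variant using std::priority_queue with lazy deletion. It uses an adjacency list rather than your adjacency matrix, and the names are my own, so treat it as an outline rather than a drop-in replacement:

#include <iostream>
#include <vector>
#include <queue>
#include <limits>
#include <functional>
#include <utility>

std::vector<int> dijkstraHeap(const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    // min-heap of (tentative distance, vertex), smallest distance on top
    std::priority_queue<std::pair<int,int>,
                        std::vector<std::pair<int,int>>,
                        std::greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;                 // stale entry, skip it (lazy deletion)
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

int main() {
    // 0 --4-- 1 --8-- 2, plus a direct edge 0 --11-- 2
    std::vector<std::vector<std::pair<int,int>>> adj(3);
    adj[0] = {{1, 4}, {2, 11}};
    adj[1] = {{0, 4}, {2, 8}};
    adj[2] = {{0, 11}, {1, 8}};
    for (int d : dijkstraHeap(adj, 0)) std::cout << d << " ";   // prints 0 4 11
    std::cout << "\n";
}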
One problem is, I believe, your minDistance function: it seems that you only update unvisited nodes (line 10, if (sptSet[v] == false && dist[n] <= min)). I guess this is wrong. Consider the following graph (nodes V = {n1, n2, n3, n4}) with the distances d(n1, n2) = 10, d(n1, n3) = 3, d(n3, n2) = 3 (all others infinity).
Starting in n1 you discover n2 with a cost of 10.
You also discover n3 with a cost of 3.
From n3 you don't discover the shorter path to n2 (n1-n3-n2), because you marked n2 as visited already.
I'm not sure if I'm right. If not, don't blame me.
I have a problem where I need to divide an AABB into a number of smaller AABBs. I need to find the minimum and maximum points of each of the smaller AABBs.
Take, for example, a cuboid divided into 64 smaller cuboids. I need to calculate the minimum and maximum points of all of these smaller cuboids, where the number of cuboids (64 here) can be specified by the end user.
I have made a basic attempt with the following code:
// Half the length of each side of the AABB.
float h = side * 0.5f;
// The length of each side of the inner AABBs.
float l = side / NUMBER_OF_PARTITIONS;
// Calculate the minimum point on the parent AABB.
Vector3 minPointAABB(
origin.getX() - h,
origin.getY() - h,
origin.getZ() - h
);
// Calculate all inner AABBs which completely fill the parent AABB.
for (int i = 0; i < NUMBER_OF_PARTITIONS; i++)
{
// This is not correct! Given a parent AABB of min (-10, 0, 0) and max (0, 10, 10) I need to
// calculate the following positions as minimum points of InnerAABB (with 8 inner AABBs).
// (-10, 0, 0), (-5, 0, 0), (-10, 5, 0), (-5, 5, 0), (-10, 0, 5), (-5, 0, 5),
// (-10, 5, 5), (-5, 5, 5)
Vector3 minInnerAABB(
minPointAABB.getX() + i * l,
minPointAABB.getY() + i * l,
minPointAABB.getZ() + i * l
);
// We can calculate the maximum point of the AABB from the minimum point
// by adding the length of each side to each coordinate of the minimum point.
Vector3 maxInnerAABB(
minInnerAABB.getX() + l,
minInnerAABB.getY() + l,
minInnerAABB.getZ() + l
);
// Add the inner AABB points to a container for later use.
}
Many thanks!
I assume that your problem is that you don't get enough sub-boxes. The number of partitions refers to partitions per dimension, right? So 2 partitions yield 8 sub-boxes, 3 partitions yield 27 sub-boxes and so on.
Then you must have three nested loops, one for each dimension:
for (int k = 0; k < NUMBER_OF_PARTITIONS; k++)
{
    for (int j = 0; j < NUMBER_OF_PARTITIONS; j++)
    {
        for (int i = 0; i < NUMBER_OF_PARTITIONS; i++)
        {
            Vector3 minInnerAABB(
                minPointAABB.getX() + i * l,
                minPointAABB.getY() + j * l,
                minPointAABB.getZ() + k * l
            );
            Vector3 maxInnerAABB(
                minInnerAABB.getX() + l,
                minInnerAABB.getY() + l,
                minInnerAABB.getZ() + l
            );
            // Add the inner AABB points to a container for later use.
        }
    }
}
Alternatively, you can have one huge loop over the cube of your partition count and work out the indices with division and remainder operations inside the loop, which is a bit messy for three dimensions.
It might also be a good idea to make the code more general by calculating three independent sub-box lengths for each dimension based on the side lengths of the original box.
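Here is a self-contained sketch of that more general version, using the question's example box (min (-10, 0, 0), max (0, 10, 10)) and 2 partitions per axis; Vec3 is just a stand-in for the question's Vector3:

#include <cstdio>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

int main() {
    // Parent AABB from the question's example: min (-10, 0, 0), max (0, 10, 10).
    Vec3 minAABB{-10.0f, 0.0f, 0.0f};
    Vec3 maxAABB{0.0f, 10.0f, 10.0f};
    int px = 2, py = 2, pz = 2;                     // partitions per axis -> 8 sub-boxes
    float lx = (maxAABB.x - minAABB.x) / px;        // independent side length per axis
    float ly = (maxAABB.y - minAABB.y) / py;
    float lz = (maxAABB.z - minAABB.z) / pz;

    std::vector<std::pair<Vec3, Vec3>> boxes;       // (min, max) of each inner AABB
    for (int k = 0; k < pz; k++)
        for (int j = 0; j < py; j++)
            for (int i = 0; i < px; i++) {
                Vec3 lo{minAABB.x + i * lx, minAABB.y + j * ly, minAABB.z + k * lz};
                Vec3 hi{lo.x + lx, lo.y + ly, lo.z + lz};
                boxes.push_back({lo, hi});
            }

    for (auto& b : boxes)
        std::printf("min(%g, %g, %g) max(%g, %g, %g)\n",
                    b.first.x, b.first.y, b.first.z,
                    b.second.x, b.second.y, b.second.z);
}

The printed minimum points match the eight positions listed in the comment of the question's code.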