I was trying to save some space by using a hashmap to represent a graph instead of an adjacency matrix. I ran the same snippet using an adjacency matrix and everything worked fine, but as soon as I changed the data structure to a hashmap it ran into an infinite loop. The infinite loop is caused by the bfs function, which returns a boolean value; more specifically, the error is on the line: if ((!visited[v]) && (rGraph[make_pair(u, v)] > 0)) Somehow this if condition does not work correctly when I represent rGraph as a hashmap.
I would also like to know whether using a hashmap to represent the graph is a preferred approach.
Here is the attached code:
bool bfs(map<pair<int, int>, int> rGraph, int s, int t, int parent[])
{
// Create a visited array and mark all vertices as not visited
bool visited[V];
memset(visited, 0, sizeof(visited));
// Create a queue, enqueue source vertex and mark source vertex
// as visited
queue <int> q;
q.push(s);
visited[s] = true;
parent[s] = -1;
// Standard BFS Loop
while (!q.empty())
{
int u = q.front();
q.pop();
for (int v=0; v<V; v++)
{
cout << "Value of graph at: " <<u << " , " << v << " : " << rGraph[make_pair(u, v)] << "\n";
//cout << "!visited[v] : " << (!visited[v]) << "rGraph[u][v] : " << rGraph[make_pair(u, v)] << "\n";
cout << "if condition : " << ((!visited[v]) && (rGraph[make_pair(u, v)] > 0)) << "\n";
if ((!visited[v]) && (rGraph[make_pair(u, v)] > 0))
{
q.push(v);
parent[v] = u;
visited[v] = true;
}
}
}
// If we reached sink in BFS starting from source, then return
// true, else false
return (visited[t] == true);
}
// Returns the maximum flow from s to t in the given graph
int fordFulkerson(map<pair<int, int> , int> graph , int s, int t)
{
int u, v;
// Create a residual graph and fill the residual graph with
// given capacities in the original graph as residual capacities
// in residual graph
map<pair<int, int>, int>rGraph; // Residual graph where rGraph[i][j] indicates
// residual capacity of edge from i to j (if there
// is an edge. If rGraph[i][j] is 0, then there is not)
for (u = 0; u < V; u++){
for (v = 0; v < V; v++){
rGraph[make_pair(u, v)] = graph[make_pair(u, v)];
}
}
int parent[V]; // This array is filled by BFS and to store path
int max_flow = 0; // There is no flow initially
// Augment the flow while there is a path from source to sink
while (bfs(rGraph, s, t, parent))
{
// Find minimum residual capacity of the edges along the
// path filled by BFS. Or we can say find the maximum flow
// through the path found.
int path_flow = INT_MAX;
for (v=t; v!=s; v=parent[v])
{
u = parent[v];
path_flow = min(path_flow, int(rGraph[make_pair(u, v)]));
}
// update residual capacities of the edges and reverse edges
// along the path
for (v=t; v != s; v=parent[v])
{
u = parent[v];
rGraph[make_pair(u, v)] -= path_flow;
rGraph[make_pair(v, u)] += path_flow;
}
// Add path flow to overall flow
max_flow += path_flow;
}
// Return the overall flow
return max_flow;
}
int main(){
map< pair<int, int>, int > graph;
graph[make_pair(0, 1)] = 16;
graph[make_pair(0, 2)] = 13;
graph[make_pair(1, 2)] = 10;
graph[make_pair(1, 3)] = 12;
graph[make_pair(2, 1)] = 4;
graph[make_pair(2, 4)] = 14;
graph[make_pair(3, 2)] = 9;
graph[make_pair(3, 5)] = 20;
graph[make_pair(4, 3)] = 7;
graph[make_pair(4, 5)] = 4;
cout << "The maximum possible flow is " << fordFulkerson(graph, 0, 5) << "\n";
return 0;
}
And the adjacency matrix looks like :
int graph[V][V] = { {0, 16, 13, 0, 0, 0},
{0, 0, 10, 12, 0, 0},
{0, 4, 0, 0, 14, 0},
{0, 0, 9, 0, 0, 20},
{0, 0, 0, 7, 0, 4},
{0, 0, 0, 0, 0, 0}};
First, just by looking at your code: you are not using a hashmap, you are using map (read: a red-black tree in most implementations). The equivalent of a "hashmap" would be unordered_map. However, if you want to save memory, you have chosen the right container: unordered_map may consume more memory than map, because unordered_map (a hashmap) requires a contiguous region of memory for its buckets, and of course the buckets are never all occupied.
And now to problems:
When you do rGraph[make_pair(u, v)] you are potentially creating a new element in your map. The indexing operator returns (see cppreference):
a reference to the existing element at the index make_pair(u, v);
if the element at make_pair(u, v) does not exist, it creates a new element under that index and returns a reference to that new element.
If you want to check whether an element exists in the map / unordered_map you have to use the find method:
auto p = make_pair(u, v);
auto iter = rGraph.find(p);
if(iter != rGraph.end())
{//element 'rGraph[p]' exists
}
else
{//element 'rGraph[p]' does not exist
}
You can also combine (potentially) inserting a new element with checking whether the element was actually created; this is usually more efficient than a separate find followed by an insert (see cppreference):
auto p = make_pair(u, v);
auto res = rGraph.insert(make_pair(p,1)); //insert value '1'
if(res.second)
{//new element was inserted
}
else
{//element already existed
}
//here res.first is an iterator pointing to the element rGraph[p] - newly inserted or not
You should use the count or find methods to check for the existence of items in the map instead of operator [], because operator [] constructs a new item if one doesn't exist. So change
rGraph[make_pair(u, v)]>0
with
rGraph.count(make_pair(u, v))>0
Also, I might suggest passing any large object (such as the map) by reference. In addition, as mentioned here, you can use "unordered_map", which is a hash table, instead of "map", which is a tree, since you don't need the map to be ordered.
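To make that concrete, here is a minimal sketch of the BFS check rewritten so it never inserts into the residual map (the helper name hasResidual is mine, not from the original code):

```cpp
#include <map>
#include <utility>

// Returns true if edge (u, v) exists in the residual map AND still has
// capacity left. Unlike rGraph[make_pair(u, v)], find() never inserts
// a new element, so the map is not mutated by the lookup.
bool hasResidual(const std::map<std::pair<int, int>, int>& rGraph, int u, int v) {
    auto it = rGraph.find(std::make_pair(u, v));
    return it != rGraph.end() && it->second > 0;
}
```

Inside bfs the condition would then read if (!visited[v] && hasResidual(rGraph, u, v)), and rGraph can now be taken by const reference, which also avoids copying the whole map on every call.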
I'm trying to write a program whose input is an array of integers and its size. The code has to delete each element that is smaller than the element to its left. We want to find the number of times we can process the array this way until we can no longer delete any elements.
The contents of the array after we return are unimportant - only the return value is of interest.
For example: given the array [10, 9, 7, 8, 6, 5, 3, 4, 2, 1], the function should return 2, because:
[10,9,7,8,6,5,3,4,2,1] → [10,8,4] → [10]
For example: given the array [1,2,3,4], the function should return 0, because
No element is larger than the element to its right
I want each element to remove its right neighbour if it is larger than that neighbour. We get a smaller array, then we repeat this operation, until we reach an array in which no element can delete another. I want to count the number of steps performed.
int Mafia(int n, vector <int> input_array)
{
int ptr = n;
int last_ptr = n;
int night_Count = 0;
do
{
last_ptr = ptr;
ptr = 1;
for (int i = 1; i < last_ptr; i++)
{
if (input_array[i] >= input_array[i - 1])
{
input_array[ptr++] = input_array[i];
}
}
night_Count++;
} while (last_ptr > ptr);
return night_Count - 1;
}
My code works but I want it to be faster.
Do you have any idea to make this code faster, or another way that is faster than this?
Here is an O(N log N) solution.
The idea is to iterate over the array and keep track of candidateKillers, numbers that could kill unvisited numbers. Then we find the killer of the current number using binary search, and update the maximum number of iterations if needed.
Since we iterate over the array, which has N numbers, and apply a log(N) binary search for each number, the overall time complexity is O(N log N).
Algorithm
If the current number is greater than or equal to the number before it, it could be a killer for numbers after it.
For each killer, we keep track of its index idx, its value num, and the number of iterations needed to reach that killer, iters.
The numbers in candidateKillers are by nature non-increasing (see the next point). Therefore we can apply binary search to find the killer of the current number, which is the one that is a) the closest to the current number and b) greater than the current number. This is implemented in searchKiller.
If the current number will be killed by a number in candidateKillers at killerPos, then all candidate killers after killerPos are outdated, because those killers will be killed before the numbers after the current number can reach them. If the current number is greater than all candidateKillers, then all the candidateKillers can be discarded.
When we find the killer of the current number, we increase the killer's iters by one, because from now on one more iteration is needed to reach that killer: the current number needs to be killed first.
class Solution {
public:
int countIterations(vector<int>& array) {
if (array.size() <= 1) {
return 0;
}
int ans = 0;
vector<Killer> candidateKillers = {Killer(0, array[0], 1)};
for (auto i = 1; i < array.size(); i++) {
int curNum = array[i];
int killerPos = searchKiller(candidateKillers, curNum);
if (killerPos == -1) {
// current one is the largest so far and all candidateKillers before are outdated
candidateKillers = {Killer(i, curNum, 1)};
continue;
}
// get rid of outdated killers
int popCount = candidateKillers.size() - 1 - killerPos;
for (auto j = 0; j < popCount; j++) {
candidateKillers.pop_back();
}
Killer killer = candidateKillers[killerPos];
ans = max(killer.iters, ans);
if (curNum < array[i-1]) {
// since the killer of the current one may not even be in the list e.g., if current is 4 in [6,5,4]
if (killer.idx == i - 1) {
candidateKillers[killerPos].iters += 1;
}
} else {
candidateKillers[killerPos].iters += 1;
candidateKillers.push_back(Killer(i, curNum, 1));
}
}
return ans;
}
private:
struct Killer {
Killer(int idx, int num, int iters)
: idx(idx), num(num), iters(iters) {};
int idx;
int num;
int iters;
};
int searchKiller(vector<Killer>& candidateKillers, int n) {
int lo = 0;
int hi = candidateKillers.size() - 1;
if (candidateKillers[0].num < n) {
return -1;
}
int ans = -1;
while (lo <= hi) {
int mid = lo + (hi - lo) / 2;
if (candidateKillers[mid].num > n) {
ans = mid;
lo = mid + 1;
} else {
hi = mid - 1;
}
}
return ans;
}
};
int main() {
vector<int> array1 = {10, 9, 7, 8, 6, 5, 3, 4, 2, 1};
vector<int> array2 = {1, 2, 3, 4};
vector<int> array3 = {4, 2, 1, 2, 3, 3};
cout << Solution().countIterations(array1) << endl; // 2
cout << Solution().countIterations(array2) << endl; // 0
cout << Solution().countIterations(array3) << endl; // 4
}
You can iterate in reverse, keeping two iterators or indices and moving elements in place. You don't need to allocate a new vector or even resize the existing one. As a minor point, you can also replace recursion with a loop, or write the code the way the compiler is likely to transform it.
This approach is still O(n^2) in the worst case, but it would be faster in practice.
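As a sketch of the in-place idea (the function name countRounds and the exact structure are my own reading of the suggestion, not the original poster's code): one compaction pass per round, using a read index and a write index on the same vector, with no extra allocation.

```cpp
#include <cstddef>
#include <vector>

// Counts the number of rounds in which at least one element is deleted.
// Each round keeps a[read] only if it is not smaller than its original
// left neighbour; survivors are compacted to the front of the vector.
int countRounds(std::vector<int>& a) {
    std::size_t len = a.size();
    int rounds = 0;
    while (len > 1) {
        std::size_t write = 1;  // index 0 always survives
        for (std::size_t read = 1; read < len; ++read) {
            // a[read - 1] still holds its original value here, because
            // write <= read, so positions >= read are not yet overwritten
            if (a[read] >= a[read - 1]) a[write++] = a[read];
        }
        if (write == len) break;  // nothing deleted; the array is stable
        len = write;
        ++rounds;
    }
    return rounds;
}
```

For {10, 9, 7, 8, 6, 5, 3, 4, 2, 1} this returns 2, and for {1, 2, 3, 4} it returns 0, matching the examples in the question.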
I have an array of pairs that represent a range of [begin,end). The array can be assumed to already sorted by the 'begin' field.
I want to generate a new array with all of the overlaps removed, and additional pairs created, as needed.
For example, let's say the array contained the following pairs:
[1,3],[2,5],[7,15],[8,9],[12,19]
The output should be as follows:
[1,2],[2,3],[3,5],[7,8],[8,9],[9,12],[12,15],[15,19]
Ultimately, the output array should contain no overlaps at all.
What's the most optimal solution that takes no more than O(m), where m is the number of entries needed in the output array? I think I see a way to do it in O(n^2), where n is the number of entries in the input array, but there's got to be a better way.
The final implementation will be in C++11, using vectors of pairs of doubles, although pseudocode solutions are fine.
EDIT:
I appreciate all responses, but I would politely request in advance to please not post any solutions that depend on particular frameworks or libraries unless such frameworks are part of standard c++11.
First I'll solve a related problem: generate merged intervals that cover the same area with no adjacency or overlap.
Walk the input array. Start with the first element. Record highwater (end of interval) and lowater (start of interval).
Proceed forward. For each element: if it overlaps the current interval, extend highwater; if not, output lowater and highwater as an interval, then record a new lowater and highwater.
This takes O(n) time on the input.
Every element of the input must be read, because any of them could extend from its start location to the end and change the result. So this is O-optimal.
This merges intervals into the largest contiguous ones you can make; you want to keep all of the "edges" or "seams" in the original intervals. To match your spec, simply keep track of the seams (in order) and break the generated intervals at those seams. "Lowater" seams always arrive in increasing order; highwater seams may not. So an ordered set of seams should work. This is O(n log n), sadly, due to the set.
// half open
struct interval {
int lowater = 0;
int highwater = 0;
bool empty() const {
return lowater == highwater;
}
friend std::ostream& operator<<( std::ostream& os, interval i ) {
return os << "[" << i.lowater << "," << i.highwater << "]";
}
};
template<class Range, class Out>
void build_intervals( Range&& in, Out out ) {
std::optional<interval> current;
std::set<int> seams;
auto dump_interval = [&](interval i){
if (i.empty()) return;
*out = i;
};
auto dump_current = [&]{
if (!current) return;
// std::cout << "Dumping " << *current << " with seams: {";
for (int seam:seams) {
// std::cout << seam << ",";
dump_interval({ current->lowater, seam });
current->lowater = seam;
}
// std::cout << "}\n";
dump_interval( *current );
current = std::nullopt;
seams.clear();
};
for (auto&& e : in) {
if (current && e.lowater <= current->highwater) {
seams.insert(e.lowater);
seams.insert(e.highwater);
// std::cout << "No gap between " << *current << " and " << e << "\n";
current->highwater = (std::max)(e.highwater, current->highwater);
// std::cout << "Combined: " << *current << "\n";
continue;
}
if (!current) {
// std::cout << "New current " << e << "\n";
} else {
// std::cout << "Gap between " << *current << " and " << e << "\n";
dump_current();
}
current = e;
seams.insert(e.lowater);
seams.insert(e.highwater);
}
dump_current();
}
live example.
I came up with something like this; by adding just a couple of ifs it runs in O(n) time. I'm just not sure about the last elements. My output:
[1 : 2], [2 : 3], [3 : 5], [7 : 8], [8 : 9], [9 : 12], [12 : 15], [15 : 19]
Maybe it's something that would help:
std::vector<std::pair<int, int>> noOverlaps(std::vector<std::pair<int, int>>& input) {
if (input.size() <= 1) {
return input;
}
std::vector<std::pair<int, int>> result;
result.push_back(input[0]);
for (int i = 1; i < input.size(); ++i) {
//If overlap
if (input[i].first < result.back().second) {
auto lastOne = result.back();
result.pop_back();
result.push_back(std::make_pair(lastOne.first, input[i].first));
if (lastOne.second > input[i].second) {
result.push_back(std::make_pair(input[i].first, input[i].second));
result.push_back(std::make_pair(input[i].second, lastOne.second));
} else {
result.push_back(std::make_pair(input[i].first, lastOne.second));
result.push_back(std::make_pair(lastOne.second, input[i].second));
}
} else {
result.push_back(input[i]);
}
}
return result;
}
Update 1
As pointed out in the comment above, this will not work with multiple overlapping intervals, so the solution above can be improved by swallowing intervals that are contained in one another and then running the same algorithm:
std::vector<std::pair<int, int>> noOverlaps(std::vector<std::pair<int, int>>& origInput) {
if (origInput.size() <= 1) {
return origInput;
}
std::vector<std::pair<int, int>> result;
std::vector<std::pair<int, int>> input;
input.push_back(origInput[0]);
for (int i = 1; i < origInput.size(); ++i) {
if (input.back().first <= origInput[i].first && input.back().second >= origInput[i].second) {
continue;
}
input.push_back(origInput[i]);
}
result.push_back(input[0]);
for (int i = 1; i < input.size(); ++i) {
//If overlap
if (input[i].first < result.back().second) {
auto lastOne = result.back();
result.pop_back();
result.push_back(std::make_pair(lastOne.first, input[i].first));
if (lastOne.second > input[i].second) {
result.push_back(std::make_pair(input[i].first, input[i].second));
result.push_back(std::make_pair(input[i].second, lastOne.second));
} else {
result.push_back(std::make_pair(input[i].first, lastOne.second));
result.push_back(std::make_pair(lastOne.second, input[i].second));
}
} else {
result.push_back(input[i]);
}
}
return result;
}
But this requires 2xO(n) space and the code is not nice.
So I just wonder: would this not be enough?
std::vector<std::pair<int, int>> noOverlaps2(std::vector<std::pair<int, int>>& origInput) {
if (origInput.size() <= 1) {
return origInput;
}
int low = origInput[0].first, high = origInput[0].second;
std::vector<std::pair<int, int>> result;
for (int i = 1; i < origInput.size(); ++i) {
if (high < origInput[i].first) {
result.emplace_back(low, high);
low = origInput[i].first;
high = origInput[i].second;
} else {
high = std::max(origInput[i].second, high);
}
}
result.emplace_back(low, high);
return result;
}
For your data it gives the output [1 : 5], [7 : 19], but it does get rid of the overlaps.
How do I erase elements one by one from a vector? I want to check the vector against some conditions after a specific element is removed.
I tried it this way, but it doesn't work. What's wrong? (v is already initialized.)
long long max = maxSubArraySum(v);
long long t = 0;
for(long long i = 0; i < n ; i++){
std::vector<long long> cv;
cv = v;
//cout << "i = " << i << "v = " <<v[i] << '\n';
cv.erase(find(cv.begin(),cv.end(),v[i])); // <—- wrong
// EDIT
// cv.erase(cv.begin()+i); <—- fix.
t = maxSubArraySum(cv);
//cout << "t = " << t << '\n';
if(t > max){
max = t;
//cout << max << '\n';
}
}
// cout << max << '\n';
}
}
For example, v = {1, -2 , 3, -2 ,5 },
I remove first 1, then maxSubArraySum will be for cv = {-2,3,-2,5 } which is 6 for this subarray {3,-2,5}.
Next I remove -2, then maxSubArraySum will be for cv = {1, 3,-2,5} which is 6 for this subarray {3,-2,5}
Next I remove 3, then maxSubArraySum will be for cv = {1, -2,-2,5} which is -2 for this subarray {-2,5}
Next I remove -2, then maxSubArraySum will be for cv = {1, -2, 3, 5} which is 8 for this subarray {3,5}
Next I remove 5, then maxSubArraySum will be for cv = {1, -2,3,-2} which is 4 for this subarray {1,-2,3}
How do I code it in C++?
EDIT :
I got the answer.
My code was slightly off, as it was deleting the first element find found. In the case of duplicates, this caused the error.
So I changed it to delete by index only.
Thank you.
You don't need to use find. Try this:
cv.erase(cv.begin()+i);
This will erase the element at the ith position, using iterator arithmetic to locate it.
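A small sketch contrasting the two approaches (the helper names are illustrative only): erasing by index removes exactly the intended element, while erasing via find removes the first matching value, which differs whenever the vector contains duplicates such as the two -2s in the question's example.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Removes exactly the i-th element.
std::vector<long long> eraseByIndex(std::vector<long long> v, std::size_t i) {
    v.erase(v.begin() + i);
    return v;
}

// Removes the FIRST element equal to value, wherever it is.
std::vector<long long> eraseByFind(std::vector<long long> v, long long value) {
    v.erase(std::find(v.begin(), v.end(), value));
    return v;
}
```

With v = {1, -2, 3, -2, 5}, eraseByIndex(v, 3) removes the second -2, but eraseByFind(v, v[3]) removes the first one, which is exactly the bug described above.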
Declare the vector cv before the for loop (outside) and your problem will be solved. To simplify, use the v directly instead of making a copy
while (!v.empty()) {
v.erase(v.begin());
cout << endl;
long long t = maxSubArraySum(v);
if (t > max) {
max = t;
// cout << max << endl;
}
}
I'm trying to figure out the following problem.
Suppose I have the following container in C++:
std::set<std::pair<int, int> > my_container;
This set (dictionary) is sorted with respect to the order < on std::pair<int, int>, which is the lexicographic order. My task is to find any element in my_container whose first coordinate equals, say, x, and return an iterator to it. Obviously, I don't want to use find_if, because I need to solve this in logarithmic time.
I would appreciate any advice on how this can be done
You can use lower_bound for this:
auto it = my_container.lower_bound(std::make_pair(x, std::numeric_limits<int>::min()));
This will give you an iterator to the first element e for which e < std::pair(x, -LIMIT) does not hold.
Such an element either has its first component > x (in which case there's no x in the set), or has the first component equal to x and is the first such. (Note that all second components are greater than or equal to std::numeric_limits<int>::min() by definition).
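Wrapped up as a small helper (the name findFirst is just for illustration), the lookup might look like this:

```cpp
#include <limits>
#include <set>
#include <utility>

// Returns an iterator to the first element whose first component is x,
// or s.end() if no element starts with x. Logarithmic time: one
// lower_bound on the tree, using INT_MIN as the smallest second component.
std::set<std::pair<int, int>>::const_iterator
findFirst(const std::set<std::pair<int, int>>& s, int x) {
    auto it = s.lower_bound(std::make_pair(x, std::numeric_limits<int>::min()));
    if (it != s.end() && it->first == x)
        return it;   // first pair whose first component equals x
    return s.end();  // no such pair
}
```

For the set {{1, 9}, {2, 0}, {2, 1}, {3, 0}} and x = 2, this returns an iterator to {2, 0}; for x = 5 it returns end().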
You could use std::set::lower_bound to get the lower and upper limits of the range like this:
#include <set>
#include <iostream>
// for readability
typedef std::set<std::pair<int, int> > int_set;
void print_results(const int_set& s, int i)
{
// first element not less than {i, 0}
int_set::const_iterator lower = s.lower_bound(std::make_pair(i, 0));
// first element not less than {i + 1, 0}
int_set::const_iterator upper = s.lower_bound(std::make_pair(i + 1, 0));
for(int_set::const_iterator iter = lower; iter != upper; ++iter)
std::cout << iter->first << ", " << iter->second << '\n';
}
int main()
{
int_set s;
s.insert(std::make_pair(2, 0));
s.insert(std::make_pair(1, 9));
s.insert(std::make_pair(2, 1));
s.insert(std::make_pair(3, 0));
s.insert(std::make_pair(7, 6));
s.insert(std::make_pair(5, 5));
s.insert(std::make_pair(2, 2));
s.insert(std::make_pair(4, 3));
print_results(s, 2);
}
Output:
2, 0
2, 1
2, 2
I have a network of stations in a subway system. The number of stations, the number of tickets I can travel between stations with, and which stations are connected to each other are given in a text file as input to the program. Which stations are connected to each other are kept in a 2D boolean matrix. I have to find the number of paths from station 0 and back to 0 that uses all of the tickets.
Here is one of the examples:
In that example, there are 7 stations and 5 tickets.
Starting and returning to 0, there are 6 paths:
0-1-2-3-4-0
0-1-5-3-4-0
0-1-6-3-4-0
0-4-3-6-1-0
0-4-3-5-1-0
0-4-3-2-1-0
I currently have a recursive solution to this that runs in O(N^k) (N represents the number of stations while k is the number of tickets), but I have to convert it to an iterative, dynamic programming solution in O(k*N^2) that works on any input.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <map>
#include <vector>
using namespace std;
// We will represent our subway as a graph using
// an adjacency matrix to indicate which stations are
// adjacent to which other stations.
struct Subway {
bool** connected;
int nStations;
Subway (int N);
private:
// No copying allowed
Subway (const Subway&) {}
void operator= (const Subway&) {}
};
Subway::Subway(int N)
{
nStations = N;
connected = new bool*[N];
for (int i = 0; i < N; ++i)
{
connected[i] = new bool[N];
fill_n (connected[i], N, false);
}
}
unsigned long long int callCounter = 0;
void report (int dest, int k)
{
++callCounter;
// Uncomment the following statement if you want to get a feel
// for how many times the same subproblems get revisited
// during the recursive solution.
cerr << callCounter << ": (" << dest << "," << k << ")" << endl;
}
/**
* Count the number of ways we can go from station 0 to station destination
* traversing exactly nSteps edges.
*/
unsigned long long int tripCounter (const Subway& subway, int destination, int nSteps)
{
report (destination, nSteps);
if (nSteps == 1)
{
// Base case: We can do this in 1 step if destination is
// directly connected to 0.
if (subway.connected[0][destination]){
return 1;
}
else{
return 0;
}
}
else
{
// General case: We can get to destination in nSteps steps if
// we can get to station S in (nSteps-1) steps and if S connects
// to destination.
unsigned long long int totalTrips = 0;
for (int S = 0; S < subway.nStations; ++S)
{
if (subway.connected[S][destination])
{
// Recursive call
totalTrips += tripCounter (subway, S, nSteps-1);
}
}
return totalTrips;
}
}
// Read the subway description and
// print the number of possible trips.
void solve (istream& input)
{
int N, k;
input >> N >> k;
Subway subway(N);
int station1, station2;
while (input >> station1)
{
input >> station2;
subway.connected[station1][station2] = true;
subway.connected[station2][station1] = true;
}
cout << tripCounter(subway, 0, k) << endl;
// For illustrative/debugging purposes
cerr << "Recursive calls: " << callCounter << endl;
}
int main (int argc, char** argv)
{
if (argc > 1)
{
ifstream in (argv[1]);
solve (in);
}
else
{
solve (cin);
}
return 0;
}
I'm not looking for a solution. I am currently out of ideas and hoping someone can point me in the right direction. Since I'm required to implement a bottom-up approach for this, how would I start with developing a dynamic programming table using the smallest sub-problems?
You should construct an array T where, after each step, T[i] tells "how many paths are there from 0 to i".
For 0 steps, this array is:
[1, 0, 0, ... 0]
Then, for each step, do:
T_new[i] = sum{0<=j<n}(T[j] if there is an edge (i, j))
After k of those steps, T[0] will be the answer.
Here's a simple Python implementation to illustrate:
def solve(G, k):
    n = len(G)
    T = [0] * n
    T[0] = 1
    for _ in range(k):
        T = [
            sum(T[j] for j in range(n) if G[a][j])
            for a in range(n)
        ]
    return T[0]
G = [
[0, 1, 0, 0, 1, 0, 0],
[1, 0, 1, 0, 0, 1, 1],
[0, 1, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 1],
[1, 0, 0, 1, 0, 0, 0],
[0, 1, 0, 1, 0, 0, 0],
[0, 1, 0, 1, 0, 0, 0]
]
print(solve(G, 5))  # 6
Dynamic programming works by storing and reusing the results of previous subproblems. In your case each subproblem consists of finding, for a given number of tickets, the number of paths that can reach each station.
In the base case you have 0 tickets, so the only station you can reach is station 0. To kick-start the algorithm we assume that the null path is also a valid path.
At this point I would recommend you to get a piece of paper and try it out yourself first. The recursion you need is something like
set base case (i.e. station 0 == 1 null path)
for each ticket in [1;k]
stations = get the stations which were reached at the previous step
for each station in stations
spread the number of paths they were reached with to the neighbors
return the number of paths for station 0 with k tickets
The complete DP algorithm, minimizing the number of changes needed to integrate it into your code, follows
/**
* Count the number of ways we can go from station 0 to station destination
* traversing exactly nSteps edges with dynamic programming. The algorithm
* runs in O(k*N^2) where k is the number of tickets and N the number of
* stations.
*/
unsigned int tripCounter(const Subway& subway, int destination, int nSteps)
{
map<int, vector<int>> m;
for (int i = 0; i < nSteps + 1; ++i)
m[i].resize(subway.nStations, 0);
m[0][0] = 1; // Base case
for (int t = 1; t < m.size(); ++t) { // For each ticket
vector<int> reachedStations;
for (int s = 0; s < subway.nStations; ++s) { // For each station
if (m[t-1][s] > 0)
reachedStations.push_back(s); // Store if it was reached in the previous state
}
for (auto s : reachedStations) {
// Find adjacent stations
for (int adj = 0; adj < subway.nStations; ++adj) {
if (s == adj)
continue;
if (subway.connected[s][adj])
m[t][adj] += m[t-1][s];
}
}
}
return m[nSteps][0];
}
Complexity is as asked.
Make sure you understand the code before using it.
As you will learn iterating over subproblems is a common pattern in dynamic programming algorithms.
I suggest you consider the sub-problem:
DP[i][a] = number of paths from 0 to a using exactly i tickets
This is initialized with DP[0][0] = 1, and DP[0][a!=0] = 0.
You can get an update formula by considering all paths to a node:
DP[i][a] = sum DP[i-1][b] for all neighbours b of a
There are kN sub-problems, each taking O(N) to compute, so the total complexity is O(kN^2).
The final answer is given by DP[k][0].
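A minimal C++ sketch of this recurrence (the function name and the vector-of-vectors adjacency matrix are assumptions, chosen to mirror the question's bool matrix), using two rolling rows since DP[i] depends only on DP[i-1]:

```cpp
#include <vector>

// DP[i][a] = number of paths from 0 to a using exactly i tickets.
// Only the previous row is needed, so we keep two rows and swap them.
unsigned long long countTrips(const std::vector<std::vector<bool>>& connected, int k) {
    int n = static_cast<int>(connected.size());
    std::vector<unsigned long long> prev(n, 0), cur(n, 0);
    prev[0] = 1;  // DP[0][0] = 1: the null path; DP[0][a != 0] = 0
    for (int step = 1; step <= k; ++step) {
        for (int a = 0; a < n; ++a) {
            cur[a] = 0;
            for (int b = 0; b < n; ++b)          // sum over all neighbours b of a
                if (connected[a][b]) cur[a] += prev[b];
        }
        prev.swap(cur);
    }
    return prev[0];  // DP[k][0]
}
```

On the 7-station example above with k = 5 tickets this yields 6, agreeing with the six listed paths; the three nested loops give the required O(k*N^2) running time.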