Elements from vector change to random values - c++

I am trying to solve a problem on infoarena.ro (a site similar to codeforces.com, but in Romanian) and for some reason, some elements of a vector just change to random values. Relevant code:
#include <fstream>
#include <vector>
#include <algorithm>
using namespace std;
ofstream out("test.out");
ifstream in("test.in");
struct Edge
{
int from, to, color, index;
bool erased = false, visited = false;
};
struct Event;
int point(const Event* event);
struct Event
{
int time;
bool add;
Edge *edge;
bool operator < (const Event other) const
{
return (this->time < other.time) ||
(this->time == other.time && this->add < other.add) ||
(this->time == other.time && this->add == other.add &&
point(this) > point(&other));
}
};
int point(const Event* event)
{
if(event->edge->from == event->time)
return event->edge->to;
else
return event->edge->from;
}
vector<Edge> edges;
vector<Event> events;
int main()
{
int N, M;
in >> N >> M;
for(int i = 0; i < M; i++)
{
int x, y;
in >> x >> y;
if(x > y)
swap(x, y);
Edge e = {x, y, i, i};
edges.push_back(e);
events.push_back(Event{x, true, &edges.back()});
Edge debug = *events.back().edge;
events.push_back(Event{y, false, &edges.back()});
debug = *events.back().edge;
}
sort(events.begin(), events.end());
for(Event event : events)
out << event.edge->from << " " << event.edge->to << "\n";
return 0;
}
I excluded code I wrote that is not relevant to the question.
Input:
5 6
1 2
2 5
1 4
3 1
4 3
5 3
First line is N (number of vertices) and M (number of edges). Next lines are all the edges.
Output:
44935712 44896968
1 4
1 3
44935712 44896968
3 1941924608
1 3
3 4
3 5
1 4
3 4
3 1941924608
3 5
I am trying to make a "journal", as my teacher called it. For each edge (x, y) I want to add it to a stack at stage x, and erase it at stage y (along with all other elements in the stack until I reach (x, y)). I want to sort by the "time" at which I perform those operations (hence the "time" field in the Event struct). "add" indicates whether the event adds an edge to the stack or removes it.
I am outputting the edges in the "events" vector for debugging purposes and I noticed that the values change to random garbage. Can someone explain why this is happening?

The problem is here
events.push_back(Event{x, true, &edges.back()});
and here
events.push_back(Event{y, false, &edges.back()});
As you push structures into the edges vector, the vector may reallocate the memory needed to store the contained structures. If such a reallocation happens, all iterators and pointers to elements in the vector become invalid.
A simple solution is to store pointers in the edges vector instead, and copy those pointers into the Event structures. Another possible solution is to do two passes: one loop to build the edges vector, and then a separate loop to create the events vector, taking the addresses only once edges has stopped growing.
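A minimal sketch of the two-pass fix, reusing the structs and streams from the question (pointers are taken only after edges has stopped growing, so they stay valid):
// first pass: build the complete edges vector
for (int i = 0; i < M; i++)
{
    int x, y;
    in >> x >> y;
    if (x > y)
        swap(x, y);
    Edge e = {x, y, i, i};
    edges.push_back(e);
}
// second pass: edges no longer grows, so these pointers remain valid
for (Edge &e : edges)
{
    events.push_back(Event{e.from, true, &e});
    events.push_back(Event{e.to, false, &e});
}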

Painting rows and columns of a 2D matrix

Problem: We are given an n×m matrix, initially filled with 0. We have to perform K queries.
Each query is one of two operations:
paint all elements in row ri with the colour ai
paint all elements in column ci with the colour ai
The same element can be painted more than once; the colour of an element is the last colour painted onto it. You have to print the final matrix after all the painting.
Input: The first line contains three space-separated integers N, M, K.
The next K lines each contain exactly one type of operation to be performed:
1) 1 ri ai means row ri is painted with colour ai
2) 2 ci ai means column ci is painted with colour ai
Output: Print the final matrix of size n×m after painting.
Sample Input:
3 3 3
1 1 3
2 2 1
1 2 2
Sample Output:
3 1 3
2 2 2
0 1 0
I have written the following code to solve it, but it shows TLE for some test cases. Can you give me some idea of how to solve it in an efficient way?
My Code
#include<bits/stdc++.h>
#define ll long long int
using namespace std;
int mat[5000][5000];
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int n,m,k,q,r,c,val,i,j,re;
cin>>n>>m>>re;
while(re--)
{
cin>>q;
if(q==1)
{
cin>>r;
cin>>val;
i=r-1;
for(j=0,k=m-1;j<=k;j++,k--)
{
mat[i][j]=val;
mat[i][k]=val;
}
}
else if(q==2)
{
cin>>c>>val;
j=c-1;
for(i=0,k=n-1;i<=k;i++,k--)
{
mat[i][j]=val;
mat[k][j]=val;
}
}
}
for(i=0;i<n;i++)
{
for(j=0;j<m;j++)
{
cout<<mat[i][j]<<" ";
}
cout<<endl;
}
}
You only need to remember, for each row and for each column, the last colour assigned to it and the time (query index) at which that happened.
Then, for a given element mat[i][j], we simply check whether the last modification of row i occurred before or after the last modification of column j.
We don't even need to build such a matrix at all.
#include <iostream>
#include <ios>
#include <vector>
int main() {
std::ios_base::sync_with_stdio(false);
std::cin.tie(0);
int n, m, re;
std::cin >> n >> m >> re;
std::vector<int> row_color (n, 0), row_date (n, -1);
std::vector<int> col_color (m, 0), col_date (m, -1);
int date = 0;
while (re--) {
int q, index, val;
std::cin >> q >> index >> val;
index--;
if (q == 1) {
row_color[index] = val;
row_date[index] = date;
} else {
col_color[index] = val;
col_date[index] = date;
}
++date;
}
for (int i = 0; i < n; ++i) {
for (int j = 0; j < m; ++j) {
int val = (row_date[i] > col_date[j]) ? row_color[i] : col_color[j];
std::cout << val << " ";
}
std::cout << "\n";
}
}
Instead of performing all the paint operations as they come in, you could:
While parsing the input, keep and update:
For each row the last color ai it is supposed to be painted in and the corresponding value k (running from 0 to K)
The same for each column
Set up an array of paint operations that combines both row and column paintings, for all rows and columns where a painting occurred
Sort the array based on k
Perform these operations on a matrix initialized with 0
This algorithm has an advantage if K is large (so, lots of overpainting), which you could expect from this kind of problem.
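A minimal sketch of this replay idea, assuming the same 1-based input format as in the problem statement (my own illustration, not a reference solution):
#include <bits/stdc++.h>
using namespace std;
int main()
{
    int n, m, k;
    cin >> n >> m >> k;
    // remember only the last paint request per row and per column
    vector<int> rowQ(n, -1), rowColor(n), colQ(m, -1), colColor(m);
    for (int q = 0; q < k; ++q) {
        int type, idx, color;
        cin >> type >> idx >> color;
        --idx;
        if (type == 1) { rowQ[idx] = q; rowColor[idx] = color; }
        else           { colQ[idx] = q; colColor[idx] = color; }
    }
    // collect the surviving operations and replay them in query order
    vector<array<int, 4>> ops;   // {query index, type, line index, colour}
    for (int i = 0; i < n; ++i) if (rowQ[i] != -1) ops.push_back({rowQ[i], 1, i, rowColor[i]});
    for (int j = 0; j < m; ++j) if (colQ[j] != -1) ops.push_back({colQ[j], 2, j, colColor[j]});
    sort(ops.begin(), ops.end());
    vector<vector<int>> mat(n, vector<int>(m, 0));
    for (auto& op : ops) {
        if (op[1] == 1) for (int j = 0; j < m; ++j) mat[op[2]][j] = op[3];
        else            for (int i = 0; i < n; ++i) mat[i][op[2]] = op[3];
    }
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < m; ++j) cout << mat[i][j] << " ";
        cout << "\n";
    }
}
At most one paint operation per row and per column survives, so the replay costs O(n*m) regardless of K.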

Parallelizing Boruvka with OpenMP

I have implemented Boruvka's algorithm sequentially in C++ and I know that one of the advantages of the algorithm is that it can easily be parallelized. I am trying to do this using OpenMP, but I can't figure out how to get it to work. I read the graph (given as an edge list) from graph.txt and print the minimum spanning tree to mst.txt. Here is my sequential code for Boruvka:
#include <iostream>
#include <fstream>
#include <sstream>
using namespace std;
// initialize data structure for edges (given in adjacency list)
struct Edge {
int v1, v2, weight; // 2 connecting vertices and a weight
};
// initialize structure for the graph
struct Graph {
int vertex, edge;
Edge* e; // undirected graph so edge from v1 to v2 is same as v2 to v1
};
// Creates a graph for #vertices and #edges using arrays
struct Graph* formGraph(int vertex, int edge)
{
Graph* graph = new Graph;
graph->vertex = vertex;
graph->edge = edge;
graph->e = new Edge[edge]; // again, v1-v2 = v2-v1
return graph;
}
// initialize structure for subsets within the graph
struct Subset {
int parent, rank; // rank will act as counter
};
// will help to find lightest edge of sets recursively
int find(struct Subset subset[], int i)
{
if (subset[i].parent != i) {
subset[i].parent = find(subset, subset[i].parent);
}
// once subset[i].parent == i, i is the root of the set
return subset[i].parent;
}
// A function that does union of two sets
void Union(struct Subset subs[], int set1, int set2)
{
int root1 = find(subs, set1);
int root2 = find(subs, set2);
//union by ranking
if (subs[root1].rank < subs[root2].rank) { // if rank2 is higher thats parent
subs[root1].parent = root2;
}
else if (subs[root1].rank > subs[root2].rank) { // if rank1 is higher thats parent
subs[root2].parent = root1;
}
else // ranks are equal so increment rank by 1
{
subs[root2].parent = root1;
subs[root1].rank++;
}
}
// the boruvka algorithm implementation
void boruvka(struct Graph* graph) {
// set data of initial graph
int vertex = graph->vertex;
int edge = graph->edge;
Edge* e = graph->e;
//initially there will always be as many subsets as there are vertices
struct Subset *subs = new Subset[vertex];
int *lightest = new int[vertex]; // array storing least weight edge
// subset for each vertex
for (int v = 0; v < vertex; v++)
{
subs[v].parent = v; // initial parent (none)
subs[v].rank = 0; // initial rank (no parent so always 0)
lightest[v] = -1; // start from -1
}
int components = vertex; // initial trees = number of vertices
int minWeight = 0;
// must keep going until there is only one tree
while (components > 1)
{
// lightest weight for all edges
for (int i=0; i<edge; i++)
{
// gets subsets for edges that could connect
int set1 = find(subs, e[i].v1);
int set2 = find(subs, e[i].v2);
// waste of time if they're already in same set so don't check
if (set1 == set2)
continue;
// if different then check which one is lightest
else
{
if (lightest[set1] == -1 || e[lightest[set1]].weight > e[i].weight) {
lightest[set1] = i;
}
if (lightest[set2] == -1 || e[lightest[set2]].weight > e[i].weight) {
lightest[set2] = i;
}
}
}
// making sure the weights are added
for (int i=0; i<vertex; i++)
{
// make sure all lightest edges are included
if (lightest[i] != -1)
{
int s1 = find(subs, e[lightest[i]].v1);
int s2 = find(subs, e[lightest[i]].v2);
if (s1 == s2)
continue;
minWeight += e[lightest[i]].weight;
// Need to sort output lexicographically!?!?!?!?!!
printf("Edge %d-%d included in MST with weight %d\n", // prints verices and weight of edge
e[lightest[i]].v1, e[lightest[i]].v2,
e[lightest[i]].weight);
// union subsets together, decrease component number
Union(subs, s1, s2);
components--;
}
lightest[i] = -1; // in case after first iteration lightest edges fall in same subset
}
}
printf("Weight of MST is %d\n", minWeight);
return;
}
// main function for calling boruvka
int main() {
ifstream infile;
char inputFileName[] = "graph.txt"; // input filename here
infile.open(inputFileName, ios::in);
string line;
getline(infile, line);
int V = atoi(line.c_str()); // set num of vertices to first line of txt
getline(infile, line);
int E = atoi(line.c_str()); // set num of edges to second line of txt
// create graph for boruvka
struct Graph* graph = formGraph(V, E);
if (infile.is_open()) {
string data[3]; // initialize data array
int count = 0; // initialize counter
while (infile.good()) { // same as while not end of file
getline(infile, line);
stringstream ssin(line);
int i = 0;
while (ssin.good() && i < 3) {
ssin >> data[i];
i++;
}
graph->e[count].v1 = atoi(data[0].c_str());
graph->e[count].v2 = atoi(data[1].c_str());
graph->e[count].weight = atoi(data[2].c_str());
count++;
}
}
freopen("mst.txt","w",stdout); // writes output into mst.txt
// call boruvka function
boruvka(graph);
infile.close(); // close the input file
return 0;
}
An example of my graph.txt is this:
9
14
0 1 4
7 8 7
1 2 8
1 7 11
2 3 7
2 5 4
2 8 2
3 4 9
3 5 14
4 5 10
5 6 2
6 7 1
6 8 6
0 7 8
The output for this example which is correct that is placed in my mst.txt is this:
Edge 0-1 included in MST with weight 4
Edge 2-8 included in MST with weight 2
Edge 2-3 included in MST with weight 7
Edge 3-4 included in MST with weight 9
Edge 5-6 included in MST with weight 2
Edge 6-7 included in MST with weight 1
Edge 1-2 included in MST with weight 8
Edge 2-5 included in MST with weight 4
Weight of MST is 37
According to the algorithm, in each iteration every tree in the forest has exactly one edge added to it independently (different trees may pick the same edge), until the added edges connect the whole forest into a single tree.
Here you can see that finding this one edge for each tree can be done in parallel. As long as you have more than one tree, you can use multiple threads to speed up the search.
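For the per-component "lightest edge" search specifically, here is a minimal OpenMP sketch (my own illustration, not a tested implementation; it assumes the Edge and Subset structs from the question, and replaces the path-compressing find with a read-only variant so concurrent calls don't race):
#include <omp.h>
#include <vector>
// read-only find: no path compression, so it is safe to call from many threads
static int findRoot(const Subset* subs, int i)
{
    while (subs[i].parent != i)
        i = subs[i].parent;
    return i;
}
// fills lightest[c] with the index of the cheapest edge leaving component c
// (lightest is expected to be initialized to -1 by the caller, as in the sequential code)
void findLightestEdges(const Edge* e, int edge, const Subset* subs, std::vector<int>& lightest)
{
    #pragma omp parallel
    {
        std::vector<int> local(lightest.size(), -1);   // per-thread candidates
        #pragma omp for nowait
        for (int i = 0; i < edge; i++) {
            int set1 = findRoot(subs, e[i].v1);
            int set2 = findRoot(subs, e[i].v2);
            if (set1 == set2) continue;                // edge stays inside one component
            if (local[set1] == -1 || e[local[set1]].weight > e[i].weight) local[set1] = i;
            if (local[set2] == -1 || e[local[set2]].weight > e[i].weight) local[set2] = i;
        }
        #pragma omp critical                           // merge thread-local results
        for (std::size_t c = 0; c < lightest.size(); c++)
            if (local[c] != -1 &&
                (lightest[c] == -1 || e[lightest[c]].weight > e[local[c]].weight))
                lightest[c] = local[c];
    }
}
The union/contraction step that follows still has to be serialized (or replaced by a parallel DSU, as the next answer describes).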
If you're interested, I've written an implementation of the parallel Boruvka's algorithm using OpenMP.
We store the graph as an edge list (edges) where each edge (u, v) appears twice: as an edge from u and from v. At each step of the algorithm, edges is sorted in O(E log E) = O(E log V) time.
Then edges are split between P processors. Each one of them calculates the array of shortest edges from its local nodes. Because allocating raw memory for all nodes is done in constant time, we can simply store this as an array and avoid using hashmaps. Then we merge the results between processors into a global shortest edge array using compare-and-swap. Note that because we sorted the edge list previously, all edges from u make up a contiguous segment in edges. Because of this, the total number of extra iterations in the CAS loop does not exceed O(P), which gives us O(E / P + P) = O(E / P) time for this step.
After that, we can merge components along the added edges in O(V * alpha(V) / P) time using a parallel DSU algorithm.
The next step is updating the lists of vertices and edges; this can be done with a parallel cumulative (prefix) sum in O(V / P) and O(E / P) time respectively.
Since the total number of iterations is O(log V), the overall time complexity is O(E log^2 V / P).
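To make the compare-and-swap merge concrete, here is a minimal sketch of the lock-free "keep the lighter edge" update (my own illustration with hypothetical names bestEdge and weight, not the actual implementation mentioned above):
#include <atomic>
#include <vector>
// bestEdge[v] holds the index of the lightest edge found so far for node v, or -1.
// Each thread calls relax() for the edges it owns; the CAS loop retries only while
// our candidate is still strictly better than whatever is currently stored.
void relax(std::vector<std::atomic<int>>& bestEdge, const std::vector<int>& weight,
           int v, int candidate)
{
    int cur = bestEdge[v].load(std::memory_order_relaxed);
    while ((cur == -1 || weight[candidate] < weight[cur]) &&
           !bestEdge[v].compare_exchange_weak(cur, candidate)) {
        // compare_exchange_weak reloads cur on failure, so the loop re-checks it
    }
}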

Running BFS to find shortest path from bottom to top of a graph

I'm trying to solve a task with the constraints 1 ≤ B, L, S ≤ 100 000. For this I run a BFS from every stone in the bottom row of the grid and search until we reach y = 0. However, when running the code, I get a timeout error. Why do I get a TLE error, and what do I change in this code to make it pass?
#include <iostream>
#include <algorithm>
#include <vector>
#include <queue>
using namespace std;
int bfs(const vector< vector<int> > &g, pair<int, int> p)
{
queue <pair<pair<int, int>, int> > que;
vector< vector<bool> > vis(100000,vector<bool>(100000,false)); //visited
int x, y, k = 0; //k = distance
pair <pair<int, int>, int> next, start;
pair <int, int> pos;
start = make_pair(make_pair(p.first, p.second), 0);
que.push(start);
while(!que.empty())
{
next = que.front();
pos = next.first;
x = pos.first;
y = pos.second;
k = next.second;
que.pop();
if (y == 0) {
return k;
}
if((g[x+1][y] == 1) && (vis[x+1][y] == false))
{
que.push(make_pair(make_pair(x+1, y), k+1));
vis[x+1][y] = true;
}
if((g[x][y+1] == 1) && (vis[x][y+1] == false))
{
que.push(make_pair(make_pair(x, y+1), k+1));
vis[x][y+1] = true;
}
if((g[x-1][y] == 1) && (vis[x-1][y] == false))
{
que.push(make_pair(make_pair(x-1, y), k+1));
vis[x-1][y] = true;
}
if((g[x][y-1] == 1) && (vis[x][y-1] == false))
{
que.push(make_pair(make_pair(x, y-1), k+1));
vis[x][y-1] = true;
}
}
}
int main()
{
int B,L,S,x,y, shortestDist = 1234567;
cin >> B >> L >> S;
vector< pair <int, int> > p; //stones in the first row
vector< vector<int> > g(B, vector<int>(L,0));
for(int i = 0; i < S; i++)
{
cin >> y >> x;
g[y][x] = 1; // stone = 1, empty = 0
if(y == B-1)
p.push_back(make_pair(x, y));
}
for(int i=0;i<p.size();++i)
{
shortestDist = min(shortestDist,bfs(g,p[i]));
}
cout << shortestDist + 2 << "\n"; //add 2 because we need to jump from shore to river at start, and stone to river at end
return 0;
}
There are two problems with your approach, resulting in a complexity of O(B*(B*L+S)).
The first problem is that you run bfs B times in the worst case, namely when the whole first row is full of stones. You have S stones and every stone has at most 4 neighbours, so every call of bfs runs in O(S), but you do it B times, so for some cases your algorithm needs about O(B*S) operations - I'm sure the author of the problem took care that programs with this running time will time out (after all, that is at least 10^10 operations).
A possible solution for this problem is to start bfs with all stones of the first row already in the queue. Having multiple starting points can also be achieved by adding a new vertex to the graph and connecting it to the stones in the first row. The second approach is not that easy for your implementation, because of the data structures you are using.
And this (the data structure) is your second problem: you have S = 10^5 elements/vertices/stones but use B*L = 10^10 memory units for them. That is around 2G of memory! I don't know what the memory limits for this problem are - it is just too much! Initializing it B times also costs you about B*B*L operations overall.
A better way is to use a sparse data structure like an adjacency list. But beware of filling this data structure in O(S^2) - use a set for O(S log S), or even an unordered_set for O(S), running time.
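A minimal sketch of both fixes combined (multi-source BFS over a sparse stone set; my own illustration, assuming the same input format as the question: B L S followed by S lines "y x", with the +2 shore jumps added at the end):
#include <bits/stdc++.h>
using namespace std;
int main()
{
    int B, L, S;
    cin >> B >> L >> S;
    unordered_map<long long, int> dist;            // stone -> BFS distance, -1 = unvisited
    auto key = [&](int y, int x) { return 1LL * y * L + x; };
    vector<pair<int,int>> stones(S);
    for (auto& s : stones) { cin >> s.first >> s.second; dist[key(s.first, s.second)] = -1; }
    queue<pair<int,int>> q;
    for (auto& s : stones)                         // all bottom-row stones are BFS sources
        if (s.first == B - 1) { dist[key(s.first, s.second)] = 0; q.push(s); }
    int best = -1;
    while (!q.empty()) {
        auto [y, x] = q.front(); q.pop();
        int d = dist[key(y, x)];
        if (y == 0) { best = d; break; }           // first arrival in the top row is optimal
        const int dy[4] = {1, -1, 0, 0}, dx[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; k++) {
            int ny = y + dy[k], nx = x + dx[k];
            if (ny < 0 || ny >= B || nx < 0 || nx >= L) continue;
            auto it = dist.find(key(ny, nx));      // a neighbour exists only if it is a stone
            if (it != dist.end() && it->second == -1) {
                it->second = d + 1;
                q.push({ny, nx});
            }
        }
    }
    cout << (best >= 0 ? best + 2 : -1) << "\n";   // +2 for the shore-to-stone and stone-to-shore jumps
    return 0;
}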

How to detect cycle in edge list representation of graph?

I have written a graph structure as an edge list, and am trying to write Kruskal's MST algorithm for it.
Here's my code so far:
#include <bits/stdc++.h>
using namespace std;
struct _ { ios_base::Init i; _() { cin.sync_with_stdio(0); cin.tie(0); } } _;
//////////////////////////////////////////////////////////////////////////////////
#define endl '\n'
#define ll long long
#define pb push_back
#define mt make_tuple
#define in(a) for (auto& i: a)
//////////////////////////////////////////////////////////////////////////////////
#define edge tuple < ll, ll, ll >
bool func (edge a, edge b) { return get<2>(a) < get<2>(b); }
struct graph
{
ll v;
vector <edge> edgelist;
void addedge (edge x) { edgelist.pb(x); }
ll find (vector <ll> parent, ll i)
{ return parent[i]==-1 ? i : find (parent, parent[i]); }
bool cycle()
{
vector <ll> parent (v);
fill (parent.begin(), parent.end(), -1);
in (edgelist)
{
ll x = find (parent, get<0>(i));
ll y = find (parent, get<1>(i));
if (x==y) return true;
else parent[x]=y;
}
return false;
}
graph mst()
{
sort (edgelist.begin(), edgelist.end(), func);
graph tree;
in(edgelist)
{
graph temp = tree;
temp.addedge(i);
if (!temp.cycle()) tree=temp;
}
return tree;
}
};
int main()
{
graph g;
cin >> g.v;
ll e;
cin >> e;
for (ll i=1; i<=e; i++)
{
ll a, b, w;
cin >> a >> b >> w;
g.addedge(mt(a, b, w));
}
graph mstree = g.mst();
in(mstree.edgelist) cout << get<0>(i) << " " << get<1>(i) << " " << get<2>(i) << endl;
cout << endl;
}
/*
Sample Input
4 5
0 1 10
0 2 6
0 3 5
1 3 15
2 3 4
Sample Output
2 3 4
0 3 5
0 1 10
*/
My code takes a very long time to produce the output. Are there any problems in my implementation? Also, if I loop this task for multiple graphs, my program crashes in the middle of execution.
My code takes a very long time to produce the output. Are there any
problems in my implementation?
There are several problems:
First,
ll find (vector <ll> parent, ll i)
{ return parent[i]==-1 ? i : find (parent, parent[i]); }
You pass parent by value, which means copying the whole array on every call. Pass it by reference (and non-const, as you will need to modify it, see point 3).
Second, in cycle() you do not need to check all edges; you only need to check the edge that is currently under consideration in the main loop (in mst()). (And do not reset parent inside cycle(); use the same array throughout mst().)
Third, read up on the "enhancements" of the disjoint-set structure (even the Wikipedia article explains them all), namely union by rank and path compression. Only with those will you get the expected performance from the disjoint-set.
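A minimal sketch of such a disjoint-set with path compression and union by rank (my own illustration of point 3, not a drop-in replacement for your class):
#include <vector>
struct DSU {
    std::vector<long long> parent, rank_;
    explicit DSU(long long n) : parent(n), rank_(n, 0) {
        for (long long i = 0; i < n; ++i) parent[i] = i;
    }
    long long find(long long i) {                    // path compression
        return parent[i] == i ? i : parent[i] = find(parent[i]);
    }
    bool unite(long long a, long long b) {           // union by rank
        a = find(a); b = find(b);
        if (a == b) return false;                    // adding this edge would create a cycle
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent[b] = a;
        if (rank_[a] == rank_[b]) ++rank_[a];
        return true;
    }
};
Kruskal then becomes: sort the edges by weight and add an edge to the tree only when unite(u, v) returns true; no per-edge copy of the graph is needed.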
Also, if I loop this task for multiple graphs, my program crashes in
the middle of execution.
It is impossible to tell without knowing how you loop and what the inputs for the multiple graphs are. However, I strongly suspect a stack overflow even on medium-sized graphs, since you pass parent by value in the recursive find, quickly consuming the stack.

Shortest path in a grid between two points. With a catch

I have this problem where I have to find the shortest path in an NxM grid from point A (always top left) to point B (always bottom right) by only moving right or down. Sounds easy, eh? Well here's the catch: I can only move the number shown on the tile I'm sitting on at the moment. Let me illustrate:
2 5 1 2
9 2 5 3
3 3 1 1
4 8 2 7
In this 4x4 grid the shortest path would take 3 steps, walking from top left 2 nodes down to 3, and from there 3 nodes right to 1, and then 1 node down to the goal.
[2] 5 1 2
9 2 5 3
[3] 3 1 [1]
4 8 2 [7]
If not for the shortest path, I could also be taking this route:
[2] 5 [1][2]
9 2 5 3
3 3 1 [1]
4 8 2 [7]
That would unfortunately take a whopping 4 steps, and thus, is not in my interest.
That should clear things out a bit. Now about the input.
The user inputs the grid as follows:
5 4 // height and width
2 5 2 2 //
2 2 7 3 // the
3 1 2 2 // grid
4 8 2 7 //
1 1 1 1 //
Homework
I have thought this through, but cannot come up with a better solution than to turn the inputted grid into an unweighted (or negative-weight) graph and run something like Dijkstra or A* (or something along those lines) on it. Well... this is the part where I get lost. I implemented something to begin with (or rather something to throw straight in the trash). It's got nothing to do with Dijkstra or A* or anything; it's just a straight-forward breadth-first search.
The Code
#include <iostream>
#include <vector>
struct Point;
typedef std::vector<int> vector_1D;
typedef std::vector< std::vector<int> > vector_2D;
typedef std::vector<Point> vector_point;
struct Point {
int y, x;
vector_point Parents;
Point(int yPos = 0, int xPos = 0) : y(yPos), x(xPos) { }
void operator << (const Point& point) { this->Parents.push_back(point); }
};
struct grid_t {
int height, width;
vector_2D tiles;
grid_t() // construct the grid
{
std::cin >> height >> width; // input grid height & width
tiles.resize(height, vector_1D(width, 0)); // initialize grid tiles
for(int i = 0; i < height; i++) //
for(int j = 0; j < width; j++) // input each tile one at a time
std::cin >> tiles[i][j]; // by looping through the grid
}
};
void go_find_it(grid_t &grid)
{
vector_point openList, closedList;
Point previous_node; // the point is initialized as (y = 0, x = 0) if not told otherwise
openList.push_back(previous_node); // (0, 0) is the first point we want to consult, of course
do
{
closedList.push_back(openList.back()); // the tile we are at is good and checked. mark it so.
openList.pop_back(); // we don't need this guy no more
int y = closedList.back().y; // now we'll actually
int x = closedList.back().x; // move to the new point
int jump = grid.tiles[y][x]; // 'jump' is the number shown on the tile we're standing on.
if(y + jump < grid.height) // if we're not going out of bounds
{
openList.push_back(Point(y+jump, x)); //
openList.back() << Point(y, x); // push in the point we're at right now, since it's the parent node
}
if(x + jump < grid.width) // if we're not going out of bounds
{
openList.push_back(Point(y, x+jump)); // push in the new promising point
openList.back() << Point(y, x); // push in the point we're at right now, since it's the parent node
}
}
while(openList.size() > 0); // when there are no new tiles to check, break out and return
}
int main()
{
grid_t grid; // initialize grid
go_find_it(grid); // basically a brute-force get-it-all-algorithm
return 0;
}
I should probably also point out that the running time cannot exceed 1 second, and the maximum grid height and width is 1000. All of the tiles are also numbers from 1 to 1000.
Thanks.
Edited Code
#include <iostream>
#include <vector>
struct Point;
typedef std::vector<int> vector_1D;
typedef std::vector< std::vector<int> > vector_2D;
typedef std::vector<Point> vector_point;
struct Point {
int y, x, depth;
vector_point Parents;
Point(int yPos = 0, int xPos = 0, int dDepth = 0) : y(yPos), x(xPos), depth(dDepth) { }
void operator << (const Point& point) { this->Parents.push_back(point); }
};
struct grid_t {
int height, width;
vector_2D tiles;
grid_t() // construct the grid
{
std::cin >> height >> width; // input grid height & width
tiles.resize(height, vector_1D(width, 0)); // initialize grid tiles
for(int i = 0; i < height; i++) //
for(int j = 0; j < width; j++) // input each tile one at a time
std::cin >> tiles[i][j]; // by looping through the grid
}
};
int go_find_it(grid_t &grid)
{
vector_point openList, closedList;
Point previous_node(0, 0, 0); // the point is initialized as (y = 0, x = 0, depth = 0) if not told otherwise
openList.push_back(previous_node); // (0, 0) is the first point we want to consult, of course
int min_path = 1000000;
do
{
closedList.push_back(openList[0]); // the tile we are at is good and checked. mark it so.
openList.erase(openList.begin()); // we don't need this guy no more
int y = closedList.back().y; // now we'll actually move to the new point
int x = closedList.back().x; //
int depth = closedList.back().depth; // the new depth
if(y == grid.height-1 && x == grid.width-1) return depth; // the first path is the shortest one. return it
int jump = grid.tiles[y][x]; // 'jump' is the number shown on the tile we're standing on.
if(y + jump < grid.height) // if we're not going out of bounds
{
openList.push_back(Point(y+jump, x, depth+1)); //
openList.back() << Point(y, x); // push in the point we're at right now, since it's the parent node
}
if(x + jump < grid.width) // if we're not going out of bounds
{
openList.push_back(Point(y, x+jump, depth+1)); // push in the new promising point
openList.back() << Point(y, x); // push in the point we're at right now, since it's the parent node
}
}
while(openList.size() > 0); // when there are no new tiles to check, break out and return false
return 0;
}
int main()
{
grid_t grid; // initialize grid
int min_path = go_find_it(grid); // basically a brute-force get-it-all-algorithm
std::cout << min_path << std::endl;
//system("pause");
return 0;
}
The program now prints the correct answer. Now I have to optimize (the run time is way too long). Any hints on this one? Optimizing is the one thing I suck at.
The Answer
In the end the solution appeared to consist of little code. The less the better, as I like it. Thanks to Dejan Jovanović for the beautiful solution
#include <iostream>
#include <vector>
#include <algorithm>
struct grid_t {
int height, width;
std::vector< std::vector<int> > tiles;
std::vector< std::vector<int> > distance;
grid_t() // construct the grid
{
std::cin >> height >> width; // input grid height & width
tiles.resize(height, std::vector<int>(width, 0)); // initialize grid tiles
distance.resize(height, std::vector<int>(width, 1000000)); // initialize grid tiles
for(int i = 0; i < height; i++) //
for(int j = 0; j < width; j++) // input each tile one at a time
std::cin >> tiles[i][j]; // by looping through the grid
}
};
int main()
{
grid_t grid; // initialize grid
grid.distance[0][0] = 0;
for(int i = 0; i < grid.height; i++) {
for(int j = 0; j < grid.width; j++) {
if(grid.distance[i][j] < 1000000) {
int d = grid.tiles[i][j];
if (i + d < grid.height) {
grid.distance[i+d][j] = std::min(grid.distance[i][j] + 1, grid.distance[i+d][j]);
}
if (j + d < grid.width) {
grid.distance[i][j+d] = std::min(grid.distance[i][j] + 1, grid.distance[i][j+d]);
}
}
}
}
if(grid.distance[grid.height-1][grid.width-1] == 1000000) grid.distance[grid.height-1][grid.width-1] = 0;
std::cout << grid.distance[grid.height-1][grid.width-1] << std::endl;
//system("pause");
return 0;
}
There is no need to construct the graph; this can easily be solved with dynamic programming using one scan over the matrix.
You can set the distance matrix D[i,j] to +inf at the start, with D[0,0] = 0. While traversing the matrix you just do
if (D[i,j] < +inf) {
int d = a[i, j];
if (i + d < M) {
D[i + d, j] = min(D[i,j] + 1, D[i + d, j]);
}
if (j + d < N) {
D[i, j + d] = min(D[i,j] + 1, D[i, j + d]);
}
}
The final minimal distance is in D[M-1, N-1]. If you wish to reconstruct the path you can keep a separate matrix that marks where the shortest path came from.
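A minimal sketch of that reconstruction on top of the same DP (my own addition; "from" is a hypothetical predecessor matrix storing, for every reached cell, the cell the shortest path came from):
#include <iostream>
#include <vector>
#include <utility>
int main()
{
    int H, W;
    std::cin >> H >> W;
    std::vector<std::vector<int>> tiles(H, std::vector<int>(W));
    for (auto& row : tiles)
        for (auto& t : row)
            std::cin >> t;
    const int INF = 1000000;
    std::vector<std::vector<int>> dist(H, std::vector<int>(W, INF));
    std::vector<std::vector<std::pair<int, int>>> from(H, std::vector<std::pair<int, int>>(W, {-1, -1}));
    dist[0][0] = 0;
    for (int i = 0; i < H; ++i)
        for (int j = 0; j < W; ++j) {
            if (dist[i][j] == INF) continue;
            int d = tiles[i][j];
            if (i + d < H && dist[i][j] + 1 < dist[i + d][j]) {
                dist[i + d][j] = dist[i][j] + 1;
                from[i + d][j] = {i, j};                      // remember the predecessor
            }
            if (j + d < W && dist[i][j] + 1 < dist[i][j + d]) {
                dist[i][j + d] = dist[i][j] + 1;
                from[i][j + d] = {i, j};
            }
        }
    if (dist[H - 1][W - 1] == INF) { std::cout << "unreachable\n"; return 0; }
    std::vector<std::pair<int, int>> path;                    // walk back from the goal
    for (std::pair<int, int> p = {H - 1, W - 1}; p.first != -1; p = from[p.first][p.second])
        path.push_back(p);
    for (auto it = path.rbegin(); it != path.rend(); ++it)    // print start-to-goal
        std::cout << "(" << it->first << "," << it->second << ") ";
    std::cout << "\n";
    return 0;
}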
You're overthinking it. :) Run a breadth-first search. The solution space is a binary tree, where each node branches into "right" or "down". From the current point, generate the down point and the right point, push their coordinates into a queue, and repeat until you reach the finish.
Without checking, something like this:
queue = [{ x: 0, y: 0, path: [] }] # seed queue with starting point
p = nil
do
raise NoSolutionException if queue.empty? # solution space exhausted
p = queue.pop # get next state from the back of the queue
break if p.x == MAX_X - 1 && p.y == MAX_Y - 1 # we found final state
l = grid[p.x][p.y] # leap length
# add right state to the front of the queue
queue.unshift({x: p.x + l, y: p.y, path: p.path + [p] }) if p.x + l < MAX_X
# add down state to the front of the queue
queue.unshift({x: p.x, y: p.y + l, path: p.path + [p] }) if p.y + l < MAX_Y
end
puts p.path
Uglifying into C++ left as exercise for the reader :p
Build an unweighted directed graph:
There are NxM vertices. In what follows, vertex v corresponds to grid square v.
There is an arc from vertex u to v iff you can jump from grid square u to square v in a single move.
Now apply a shortest path algorithm from the top-left vertex to the bottom-right one.
Finally, observe that you don't actually need to build the graph explicitly. You can simply implement the shortest path algorithm in terms of the original grid.
Start off with a brute-force approach to get it working, then optimize from there. The brute force is straightforward: run it recursively. Take your two moves, recurse on each, and so on. Collect all the valid answers and keep the minimum. If the run time is too long, you can then optimize by a variety of means. For instance, some of the moves may be invalid (because they exceed a dimension of the grid) and can be eliminated, and so on. Keep optimizing until a worst-case input runs at the desired speed.
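A minimal sketch of that brute-force recursion (my own illustration; it is exponential in the worst case, so it is only the starting point for the optimizations described above):
#include <climits>
#include <vector>
#include <algorithm>
// returns the minimum number of moves from (i, j) to the bottom-right cell,
// or INT_MAX if the goal cannot be reached from there
int bruteForce(const std::vector<std::vector<int>>& g, int i, int j)
{
    int H = g.size(), W = g[0].size();
    if (i == H - 1 && j == W - 1)
        return 0;                                  // already at the goal
    int d = g[i][j], best = INT_MAX;
    if (i + d < H) {                               // try moving down
        int down = bruteForce(g, i + d, j);
        if (down != INT_MAX) best = std::min(best, down + 1);
    }
    if (j + d < W) {                               // try moving right
        int right = bruteForce(g, i, j + d);
        if (right != INT_MAX) best = std::min(best, right + 1);
    }
    return best;
}
Memoizing bruteForce per cell turns this directly into the dynamic-programming solution shown in the accepted answer.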
Having said that, the performance requirements only make sense if you are using the same system and inputs, and even then there are some caveats. Big O notation is a much better way of analyzing the performance, plus it can point you to an algorithm and eliminate the need for profiling.