I found this problem somewhere in a contest and haven't been able to come up with a solution yet.
There are N cities with coordinates (x, y). I have to start from the first
city and reach the second city. There is a gas station in each city,
so I have to find the minimum necessary volume of the gas tank
needed to reach the final city.
For example:
Input:
3
17 4
19 4
18 5
Output:
1.414
Here, my way is: 1->3->2
I'm using a simple brute-force method, but it is too slow. How can I optimize my code?
Maybe there is a better solution?
#include <iostream>
#include <algorithm>
#include <stack>
#include <math.h>
#include <cstring>
#include <iomanip>
#include <map>
#include <queue>
#include <fstream>
using namespace std;
int n, used[203]; // used[i] == 1 while city i is on the current path
double min_dist;  // smallest "longest leg" found so far
struct pc {
int x, y;
};
pc a[202];
double find_dist(pc a, pc b) {
double dist = sqrt( (a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y) );
return dist;
}
// Brute-force DFS: d is the longest leg so far, k is the current city, step is the recursion depth
void functio(double d, int used[], int k, int step) {
used[k] = 1;
if(k == 1) {
if(d < min_dist) {
min_dist = d;
}
used[k] = 0;
return;
}
for(int i = 1; i < n; ++i) {
if(i != k && used[i] == 0) {
double temp = find_dist(a[k], a[i]);
if(temp > d) {
if(temp < min_dist)
functio(temp, used, i, step + 1);
}
else {
if(d < min_dist)
functio(d, used, i, step + 1);
}
}
}
used[k] = 0;
}
int main() {
cin >> n;
for(int i = 0; i < n; ++i)
cin >> a[i].x >> a[i].y;
min_dist = 1000000;
memset(used, 0, sizeof(used));
functio(0, used, 0, 0);
cout << fixed << setprecision(3) << min_dist << endl;
}
The minimum spanning tree has the neat property of encoding all of the paths between vertices that minimize the length of the longest edge on the path. For Euclidean MST, you can compute the Delaunay triangulation and then run your favorite O(m log n)-time algorithm (on a graph with m = O(n) edges) for a total running time of O(n log n). Alternatively, you could run Prim with a naive priority queue for an O(n^2)-time algorithm with a good constant (especially if you exploit SIMD).
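To make the Prim-with-a-naive-priority-queue idea concrete, here is a minimal O(n^2) sketch (my own illustration, not code from the question or the contest) applied directly to this problem: grow a tree from city 1 and keep, for every city, the smallest possible "longest leg" needed to reach it; the value computed for city 2 is the required tank size.
#include <bits/stdc++.h>
using namespace std;

// O(n^2) bottleneck (minimax) path from city 1 to city 2:
// minimize the longest single leg on the route.
int main() {
    int n;
    cin >> n;
    vector<double> x(n), y(n);
    for (int i = 0; i < n; ++i) cin >> x[i] >> y[i];

    auto leg = [&](int a, int b) { return hypot(x[a] - x[b], y[a] - y[b]); };

    vector<double> best(n, numeric_limits<double>::infinity()); // best[i] = smallest possible longest leg to reach city i
    vector<bool> done(n, false);
    best[0] = 0.0;                                              // start at city 1 (index 0)
    for (int iter = 0; iter < n; ++iter) {
        int u = -1;
        for (int i = 0; i < n; ++i)                             // "naive priority queue": linear scan
            if (!done[i] && (u == -1 || best[i] < best[u])) u = i;
        done[u] = true;
        if (u == 1) break;                                      // city 2 (index 1) is finalized
        for (int v = 0; v < n; ++v)                             // the bottleneck via u is max(best[u], leg(u, v))
            if (!done[v]) best[v] = min(best[v], max(best[u], leg(u, v)));
    }
    cout << fixed << setprecision(3) << best[1] << endl;
}
On the sample input this prints 1.414, matching the expected output.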
So what you are trying to optimise in your algorithm is the longest distance you travel between two cities, because that's how big your gas tank needs to be.
This is a variation on shortest path, because there you're trying to optimise the entire path length.
I think you could get away with this:
make a list of edges. (the distance between each pair of cities)
remove the longest edge from the list, unless this causes the destination to become unreachable.
once you cannot remove the longest edge anymore, it is the limiting factor for reaching your destination. The rest of the route doesn't matter anymore.
Then in the end you should have a list of edges that make up a path between source and destination.
I haven't proven this solution to be optimal, so no guarantees. But consider this: if you remove the longest edge, there are only shorter edges left to take, so the maximum leg distance won't increase.
About the complexity: there are O(n^2) edges (one per pair of cities), so sorting them takes O(n^2 log n) time.
Memory complexity is O(n^2)
This is probably not the most efficient algorithm, because it is a pure graph algorithm and makes no use of the fact that the cities lie in a Euclidean plane. There is probably some optimisation to be found there...
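For what it's worth, here is a sketch of the same idea run in the opposite direction (my own illustration, not the answerer's code): instead of deleting edges from the longest down and re-checking reachability, add edges from the shortest up with a union-find structure and stop as soon as city 1 and city 2 end up in the same component; the weight of the last edge added is the required tank size.
#include <bits/stdc++.h>
using namespace std;

// Union-find (disjoint set union) with path compression.
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int a) { return p[a] == a ? a : p[a] = find(p[a]); }
    void unite(int a, int b) { p[find(a)] = find(b); }
};

int main() {
    int n;
    cin >> n;
    vector<double> x(n), y(n);
    for (int i = 0; i < n; ++i) cin >> x[i] >> y[i];

    struct Edge { double w; int a, b; };
    vector<Edge> edges;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            edges.push_back({hypot(x[i] - x[j], y[i] - y[j]), i, j});
    sort(edges.begin(), edges.end(),
         [](const Edge& l, const Edge& r) { return l.w < r.w; });

    DSU dsu(n);
    for (const Edge& e : edges) {
        dsu.unite(e.a, e.b);                          // add edges from shortest to longest
        if (dsu.find(0) == dsu.find(1)) {             // cities 1 and 2 just became connected
            cout << fixed << setprecision(3) << e.w << endl;
            break;
        }
    }
}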
You can reduce the time complexity to O(n^2 * log n) using binary search, which will run within the 1-second time limit. The idea behind the binary search is: if we can reach city 2 from city 1 using a tank of volume x, there is no need to check any larger volume; if we cannot, then we need more than x. To check whether we can reach city 2 using volume x you can use BFS: if two cities are within distance x of each other, it is possible to move from one to the other, and we say they are connected by an edge.
Code:
int vis[203];
double eps=1e-8;
struct pc {
double x, y;
};
double find_dist(pc &a, pc &b) {
double dist=sqrt((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y));
return dist;
}
bool can(vector<pc> &v, double x) { // can we reach 2nd city with volume x
int n=v.size();
vector<vector<int>> graph(n, vector<int>(n, 0)); // graph in adjacency matrix form
// set edges in graph
for(int i=0; i<n; i++) {
for(int j=0; j<n; j++) {
if(i==j) continue; //same city
double d=find_dist(v[i], v[j]);
if(d<=x) graph[i][j]=1; // can reach from city i to city j using x volume
}
}
// perform BFS
memset(vis, 0, sizeof(vis));
queue<int> q;
q.push(0); // we start from city 1 (index 0 in 0-based indexing)
vis[0]=1;
while(!q.empty()) {
int top=q.front();
q.pop();
if(top==1) return true; // can reach city 2 (1 in 0-based index)
for(int i=0; i<n; i++) {
if(top!=i && !vis[i] && graph[top][i]==1) {
q.push(i);
vis[i]=1;
}
}
}
return false; // can't reach city 2
}
double calc(vector<pc> &v) { // calculates minimum volume using binary search
double lo=0, hi=1e18;
while(abs(hi-lo)>eps) {
double mid=(lo+hi)/2;
if(can(v, mid)) {
hi=mid; // volume mid is enough, so the answer is at most mid
} else {
lo=mid; // we need more than mid volume
}
}
return lo;
}
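The snippet above has no driver; for completeness, a possible main (my addition, assuming the same input format as in the question) could look like this:
#include <bits/stdc++.h>
using namespace std;

// ... the answer's pc, find_dist, can and calc definitions go here ...

int main() {
    int n;
    cin >> n;                                   // same input format as in the question
    vector<pc> v(n);
    for (int i = 0; i < n; ++i) cin >> v[i].x >> v[i].y;
    cout << fixed << setprecision(3) << calc(v) << endl;
}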
Given heights of n towers and a value k. We need to either increase or decrease height of every tower by k (only once) where k > 0. The task is to minimize the difference between the heights of the longest and the shortest tower after modifications, and output this difference.
I get the intuition behind the solution below, but I cannot convince myself of its correctness.
// C++ program to find the minimum possible
// difference between maximum and minimum
// elements when we have to add/subtract
// every number by k
#include <bits/stdc++.h>
using namespace std;
// Modifies the array by subtracting/adding
// k to every element such that the difference
// between maximum and minimum is minimized
int getMinDiff(int arr[], int n, int k)
{
if (n == 1)
return 0;
// Sort all elements
sort(arr, arr+n);
// Initialize result
int ans = arr[n-1] - arr[0];
// Handle corner elements
int small = arr[0] + k;
int big = arr[n-1] - k;
if (small > big)
swap(small, big);
// Traverse middle elements
for (int i = 1; i < n-1; i ++)
{
int subtract = arr[i] - k;
int add = arr[i] + k;
// If both subtraction and addition
// do not change diff
if (subtract >= small || add <= big)
continue;
// Either subtraction causes a smaller
// number or addition causes a greater
// number. Update small or big using
// greedy approach (If big - subtract
// causes smaller diff, update small
// Else update big)
if (big - subtract <= add - small)
small = subtract;
else
big = add;
}
return min(ans, big - small);
}
// Driver function to test the above function
int main()
{
int arr[] = {4, 6};
int n = sizeof(arr)/sizeof(arr[0]);
int k = 10;
cout << "\nMaximum difference is "
<< getMinDiff(arr, n, k);
return 0;
}
Can anyone help me provide the correct solution to this problem?
The code above works; however, I didn't find much explanation, so I'll try to add some in order to help develop intuition.
For any given tower, you have two choices, you can either increase its height or decrease it.
Now if you decide to increase its height from say Hi to Hi + K, then you can also increase the height of all shorter towers as that won't affect the maximum. Similarly, if you decide to decrease the height of a tower from Hi to Hi − K, then you can also decrease the heights of all taller towers.
We will make use of this: we have n buildings, and we'll try to make each building the highest, then see which choice gives us the least range of heights (which is our answer). Let me explain:
So what we want to do is: 1) First, sort the array (you will soon see why).
2) Then, for every building from i = 0 to n-2 [1], we try to make it the highest (by adding K to it and to the buildings on its left, and subtracting K from the buildings on its right).
So say we're at building Hi: we've added K to it and to the buildings before it, and subtracted K from the buildings after it. The minimum height of the buildings will now be min(H0 + K, Hi+1 - K), i.e. min(1st building + K, next building on the right - K).
(Note: This is because we sorted the array. Convince yourself by taking a few examples.)
Likewise, the maximum height of the buildings will be max(Hi + K, Hn-1 - K), i.e. max(current building + K, last building on right - K).
3) max - min gives you the range.
[1] Note the case when i = n-1: there is no building after the current one, so we add K to every building, and the range is simply
height[n-1] - height[0], since K is added to everything and cancels out.
Here's a Java implementation based on the idea above:
import java.util.Arrays;

class Solution {
int getMinDiff(int[] arr, int n, int k) {
Arrays.sort(arr);
int ans = arr[n-1] - arr[0];
int smallest = arr[0] + k, largest = arr[n-1]-k;
for(int i = 0; i < n-1; i++){
int min = Math.min(smallest, arr[i+1]-k);
int max = Math.max(largest, arr[i]+k);
if (min < 0) continue;
ans = Math.min(ans, max-min);
}
return ans;
}
}
int getMinDiff(int a[], int n, int k) {
sort(a,a+n);
int i,mx,mn,ans;
ans = a[n-1]-a[0]; // this can be one possible solution
for(i=1;i<n;i++) // start from 1 so that a[i-1] below is always valid
{
if(a[i]>=k) // since height of tower can't be -ve so taking only +ve heights
{
mn = min(a[0]+k, a[i]-k);
mx = max(a[n-1]-k, a[i-1]+k);
ans = min(ans, mx-mn);
}
}
return ans;
}
This is C++ code, it passed all the test cases.
This Python code might be of some help to you. The code is self-explanatory.
def getMinDiff(arr, n, k):
arr = sorted(arr)
ans = arr[-1]-arr[0] #this case occurs when either we subtract k or add k to all elements of the array
for i in range(n):
mn=min(arr[0]+k, arr[i]-k) #after sorting, arr[0] is minimum. so adding k pushes it towards maximum. We subtract k from arr[i] to get any other worse (smaller) minimum. worse means increasing the diff b/w mn and mx
mx=max(arr[n-1]-k, arr[i]+k) # after sorting, arr[n-1] is maximum. so subtracting k pushes it towards minimum. We add k to arr[i] to get any other worse (bigger) maximum. worse means increasing the diff b/w mn and mx
ans = min(ans, mx-mn)
return ans
Here's a solution:
But before jumping to the solution, here's some info that is required to understand it. In the best-case scenario the minimum difference would be zero, and this can happen only in two cases: (1) the array contains duplicates, or (2) for some element, let's say 'x', there exists another element in the array with the value 'x + 2*k'.
The idea is pretty simple.
First we would sort the array.
Next, we will try to find either the optimum value (for which the answer would come out to be zero) or at least the number closest to the optimum value, using binary search.
Here's a JavaScript implementation of the algorithm:
function minDiffTower(arr, k) {
arr = arr.sort((a,b) => a-b);
let minDiff = Infinity;
let prev = null;
for (let i=0; i<arr.length; i++) {
let el = arr[i];
// Handling case when the array have duplicates
if (el == prev) {
minDiff = 0;
break;
}
prev = el;
let targetNum = el + 2*k; // Let's say we have an element 10. The difference would be zero when there exists an element with value 10+2*k (this is the 'optimum value' discussed in the explanation)
let closestMatchDiff = Infinity; // It's not necessary that there would exist 'targetNum' in the array, so we try to find the closest to this number using Binary Search
let lb = i+1;
let ub = arr.length-1;
while (lb<=ub) {
let mid = lb + ((ub-lb)>>1);
let currMidDiff = arr[mid] > targetNum ? arr[mid] - targetNum : targetNum - arr[mid];
closestMatchDiff = Math.min(closestMatchDiff, currMidDiff);
if (arr[mid] == targetNum) break; // in this case the answer would be simply zero, no need to proceed further
else if (arr[mid] < targetNum) lb = mid+1;
else ub = mid-1;
}
minDiff = Math.min(minDiff, closestMatchDiff);
}
return minDiff;
}
Here is the C++ code; I have continued from where you left off. The code is self-explanatory.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int minDiff(int arr[], int n, int k)
{
// If the array has only one element.
if (n == 1)
{
return 0;
}
//sort all elements
sort(arr, arr + n);
//initialise result
int ans = arr[n - 1] - arr[0];
//Handle corner elements
int small = arr[0] + k;
int big = arr[n - 1] - k;
if (small > big)
{
// Swap so that small <= big.
int temp = small;
small = big;
big = temp;
}
//traverse middle elements
for (int i = 0; i < n - 1; i++)
{
int subtract = arr[i] - k;
int add = arr[i] + k;
// If both subtraction and addition do not change the diff.
// Subtraction does not give new minimum.
// Addition does not give new maximum.
if (subtract >= small or add <= big)
{
continue;
}
// Either subtraction causes a smaller number or addition causes a greater number.
//Update small or big using greedy approach.
// if big-subtract causes smaller diff, update small Else update big
if (big - subtract <= add - small)
{
small = subtract;
}
else
{
big = add;
}
}
return min(ans, big - small);
}
int main(void)
{
int arr[] = {1, 5, 15, 10};
int n = sizeof(arr) / sizeof(arr[0]);
int k = 3;
cout << "\nMaximum difference is: " << minDiff(arr, n, k) << endl;
return 0;
}
class Solution {
public:
int getMinDiff(int arr[], int n, int k) {
sort(arr, arr+n);
int diff = arr[n-1]-arr[0];
int mine, maxe;
for(int i = 0; i < n; i++)
arr[i]+=k;
mine = arr[0];
maxe = arr[n-1]-2*k;
for(int i = n-1; i > 0; i--){
if(arr[i]-2*k < 0)
break;
mine = min(mine, arr[i]-2*k);
maxe = max(arr[i-1], arr[n-1]-2*k);
diff = min(diff, maxe-mine);
}
return diff;
}
};
class Solution:
def getMinDiff(self, arr, n, k):
# code here
arr.sort()
res = arr[-1]-arr[0]
for i in range(1, n):
if arr[i]>=k:
# at a time we can increase or decrease one number only.
# Hence assuming we decrease ith elem, we will increase i-1 th elem.
# using this we basically find which is new_min and new_max possible
# and if the difference is smaller than res, we return the same.
new_min = min(arr[0]+k, arr[i]-k)
new_max = max(arr[-1]-k, arr[i-1]+k)
res = min(res, new_max-new_min)
return res
So I was solving this USACO 2013 February Silver Contest - Perimeter - Problem 1.
Link to the problem: Problem Link
Link to the bronze version of this problem: Problem Link
Link to solutions: Silver - Link to Solution Bronze - Link to Solution
The problem:
Problem 1: Perimeter [Brian Dean, 2013]
Farmer John has arranged N hay bales (1 <= N <= 50,000) in the middle of
one of his fields. If we think of the field as a 1,000,000 x 1,000,000
grid of 1 x 1 square cells, each hay bale occupies exactly one of these
cells (no two hay bales occupy the same cell, of course).
FJ notices that his hay bales all form one large connected region, meaning
that starting from any bale, one can reach any other bale by taking a
series of steps either north, south, east, or west onto directly adjacent
bales. The connected region of hay bales may however contain "holes" --
empty regions that are completely surrounded by hay bales.
Please help FJ determine the perimeter of the region formed by his hay
bales. Note that holes do not contribute to the perimeter.
PROBLEM NAME: perimeter
INPUT FORMAT:
Line 1: The number of hay bales, N.
Lines 2..1+N: Each line contains the (x,y) location of a single hay
bale, where x and y are integers both in the range
1..1,000,000. Position (1,1) is the lower-left cell in FJ's
field, and position (1000000,1000000) is the upper-right cell.
SAMPLE INPUT (file perimeter.in):
8
10005 200003
10005 200004
10008 200004
10005 200005
10006 200003
10007 200003
10007 200004
10006 200005
INPUT DETAILS:
The connected region consisting of hay bales looks like this:
XX
X XX
XXX
OUTPUT FORMAT:
Line 1: The perimeter of the connected region of hay bales.
SAMPLE OUTPUT (file perimeter.out):
14
OUTPUT DETAILS:
The length of the perimeter of the connected region is 14 (for example, the
left side of the region contributes a length of 3 to this total). Observe
that the hole in the middle does not contribute to this number.
What I did
I went ahead with a recursive solution to the problem, which goes like this:
#include <iostream>
#include <cmath>
#include <vector>
#include <algorithm>
#include <map>
using namespace std;
#define rep(i,a,b) for(auto (i)=a;i<b;i++)
#define list(i,N) for(auto (i)=0;i<N;i++)
typedef long long ll;
typedef vector<ll> vi;
typedef pair<ll,ll> pi;
#define mp make_pair
#define pb push_back
#define int ll
#define INF 1e18+5
#define mod 1000000007
//One map for storing whether a cell has hay bale or not
//And the other for visited - whether a cell has been visited or not
map<pi,bool> vis;
map<pi,bool> exists;
int ans = 0;
void solve(int i, int j){
//Check about the visited stuff
if(vis[mp(i,j)]) return;
vis[mp(i,j)] = true;
//Find the answer now
ans += 4;
if(exists[mp(i-1,j)]){
--ans; solve(i-1,j);
}
if(exists[mp(i+1,j)]){
--ans; solve(i+1,j);
}
if(exists[mp(i,j+1)]){
--ans; solve(i,j+1);
}
if(exists[mp(i,j-1)]){
--ans; solve(i,j-1);
}
}
int32_t main(){
ios::sync_with_stdio(0);
cin.tie(0); cout.tie(0);
int N; cin >> N;
int first, second; //the starting point where we start the function...
while(N--){
int a,b; cin >> a >> b;
first = a; second = b; //in the end, it is just the coordinate specified in the last in the input...
exists[mp(a,b)] = true; //Hay Bale exists...
}
solve(first,second);
cout << ans << "\n";
return 0;
}
Basically, what I am doing is:
Start at a cell.
First, check if the cell has been previously visited. If yes, return. If not, make it visited.
Add 4 to the counter for all the four sides.
Look at all of the cell's neighbouring cells. If a neighbour also has a hay bale, subtract 1 from the counter (no boundary is needed between them) and then go to step 2 for that neighbour.
The problem which I am facing
Observe that this code also counts the boundary inside the hole, but we don't need to include that in our answer. I, however, don't know how to exclude it from the answer...
Why I mentioned the Bronze Problem
If you look at the solution of the Bronze problem (which is just the same problem with lower constraints), Brian Dean also implements this sort of recursive solution, which is similar to what I'm doing in my code. The code is below:
#include <stdio.h>
#define MAX_N 100
int already_visited[MAX_N+2][MAX_N+2];
int occupied[MAX_N+2][MAX_N+2];
int perimeter;
int valid(int x, int y)
{
return x>=0 && x<=MAX_N+1 && y>=0 && y<=MAX_N+1;
}
void visit(int x, int y)
{
if (occupied[x][y]) { perimeter++; return; }
if (already_visited[x][y]) return;
already_visited[x][y] = 1;
if (valid(x-1,y)) visit(x-1,y);
if (valid(x+1,y)) visit(x+1,y);
if (valid(x,y-1)) visit(x,y-1);
if (valid(x,y+1)) visit(x,y+1);
}
int main(void)
{
int N, i, x, y;
freopen ("perimeter.in", "r", stdin);
freopen ("perimeter.out", "w", stdout);
scanf ("%d", &N);
for (i=0; i<N; i++) {
scanf ("%d %d", &x, &y);
occupied[x][y] = 1;
}
visit(0,0);
printf ("%d\n", perimeter);
return 0;
}
Why this solution does not work for the Silver
This is because the Silver version of the problem has higher constraints but the same time limit, which makes this code time out.
So, I would be grateful if anybody could help me solve this problem in order to exclude the perimeter taken up by the hole in the middle.
Your solution can be quite similar to the second one posted, but instead of walking on the bales, you walk on the perimeter:
void solve(int i, int j){
if(vis[mp(i,j)]) return;
if(exists[mp(i,j)]) return;
if(there_is_no_bale_next_to(i,j)) return; // consider all 8 directions
vis[mp(i,j)] = true;
ans ++;
solve(i-1,j);
solve(i+1,j);
solve(i,j+1);
solve(i,j-1);
}
You first run solve on a point that is definitely on the perimeter (for example, the empty cell just west of the westernmost bale).
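The there_is_no_bale_next_to check above is not spelled out; one possible implementation (my own sketch, reusing the exists map and the mp macro from the question's code) could be:
// Hypothetical helper for the sketch above: true when none of the 8
// surrounding cells contains a hay bale, i.e. (i, j) is not adjacent
// to the region and therefore not part of the perimeter walk.
bool there_is_no_bale_next_to(int i, int j) {
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;      // skip the cell itself
            if (exists[mp(i + di, j + dj)]) return false;
        }
    return true;
}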
The problem with your solution is that it flood-fills the 'X' cells themselves, which will inevitably count the holes as well. Consider instead launching a flood fill that moves around the object without actually getting into it. My solution below implements this idea, so holes are not counted. Brian Dean's solution in the official editorial is also based on this idea, so you should check that out as well.
#include<bits/stdc++.h>
using namespace std;
int n, ans = 0;
map<pair<int,int>, bool> m, vis;
pair<int,int> p = {INT_MAX, INT_MAX};
bool adj (int i, int j) {
for (int x = -1; x <= 1; x++) {
for (int y = -1; y <= 1; y++) {
if (!x && !y) continue;
if (m[{i + x, j + y}]) return true;
}
}
return false;
}
int get_cnt (int i, int j) {
int res = 0;
if (m[{i, j + 1}]) res++;
if (m[{i, j - 1}]) res++;
if (m[{i + 1, j}]) res++;
if (m[{i - 1, j}]) res++;
return res;
}
void floodfill (int i, int j) {
if (m[{i, j}] || vis[{i, j}] || !adj(i, j)) return;
vis[{i, j}] = true;
ans += get_cnt(i, j);
floodfill (i, j + 1);
floodfill (i, j - 1);
floodfill (i + 1, j);
floodfill (i - 1, j);
}
int main () {
cin >> n;
for (int i = 0; i < n; i++) {
int x, y;
cin >> x >> y;
m[{x, y}] = true;
p = min(p, {x, y});
}
floodfill (p.first - 1, p.second);
cout << ans << endl;
}
Basically, what I want to do is verify whether the numbers 180 30 80 280 130 330 230 30 30 330 80 form an arithmetic progression and find the ratio. I tried using the following algorithm, but even if I insert 35 into the sequence (which should break the progression), the answer is 50 instead of NO.
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
int x, fr[1000]={0} ,r=0 , ok=0, i, v[100], j=0;
ifstream f("bac.in");
while(f>>x)
fr[x]++;
for(i=0; i<1000; i++)
{
if(fr[i]!=0) // I search for the first frequency and only then start to count
ok=0;
else ok=1;
if(ok==0)
r++; // ratio
else {v[j++]=r; r=0;} // when another frequency is found, the ratio is reset
}
for(i=0;i<j-1;i++) // verify that every gap (ratio) is equal
{
if(v[i]==v[i+1])
ok=1;
else ok=0;
}
if(ok==1)
cout<<++v[i];
else cout<<"NO";
f.close();
}
My idea was to count the number of 0's between non-zero frequencies, treat that count as the ratio, and put it into an array where I would verify that all the ratios are equal. If I missed any piece of information, please tell me.
It has to be done in an efficient way, so putting the numbers into an array, sorting it, deleting the duplicates and only after that finding the ratio (if there is any) is out of the discussion.
You can use an std::set to accomplish this. Set guarantees no duplicates and also keeps your data sorted! This means you just have to toss everything into the set and then iterate over once to check the differences.
An insertion into the set costs you O(log n), and we're doing n of them, so that's O(n log n). Afterwards, we loop over the set to check whether we have a progression or not, which is another O(n), giving us complexity O(n log n + n) = O(n log n). Is that fast enough for you?
#include <iostream>
#include <set>
#include <iterator>
int main() {
std::set<int> values;
int x;
while (std::cin >> x) { //replace cin with your file handler
values.insert(x);
}
int difference = 0;
bool good = true;
for(auto it = values.begin(); it != values.end() && std::next(it) != values.end(); it++) {
if(!difference) difference = *it - *std::next(it);
else if(difference != *it - *std::next(it)) good = false;
}
if(good) {
std::cout << "We have a progression!" << std::endl;
} else {
std::cout << "No go on the progression." << std::endl;
}
return 0;
}
See it in action here (ideone link).
Your error is in the last loop, which should be:
bool ok = true;
for (int i = 0; i < j - 1; ++i) // i verify if every ratio is equal in this for
{
if (v[i] != v[i+1]) {
ok = false;
break;
}
}
or
ok = (j == 0 || std::all_of(v, v + j, [&](int e) { return e == v[0]; }));
Fixed code:
std::vector<int> values {180, 30, 80, 280, 130, 330, 230, 30, 30, 330, 80 };
int fr[1000]={0};
for (auto e : values)
fr[e]++;
int j = 0;
int v[100];
int r = 0;
for(int i=0; i<1000; i++)
{
if(fr[i] == 0)
r++; // ratio
else {
v[j++]=r;
r=0;
} // when another frequency is found, the ratio is reset
}
bool ok = true;
for (int i = 1; i < j - 1; ++i) // verify that every gap is equal
{
if (v[i] != v[i+1]) {
ok = false;
break;
}
}
if(ok==1)
std::cout << v[1] + 1;
else
std::cout<<"NO";
Demo.
Here is an O(n) solution:
It iterates over the list finding the smallest, second smallest and largest elements. (It does this with three separate iterations, but it could be one loop with a little more code.)
Denoting these x0, x1, and xn, and assuming there is an arithmetic progression, we can now figure out the difference (c = x1 - x0) and the total number of elements (n + 1 = (xn - x0)/c + 1).
We create an array of booleans of length n+1 and start marking off the present elements. If any element doesn't fall into this pattern we immediately return false. At the end, we check that all the required elements have been marked present.
By the way, I consider #scohe001's answer to be better. It is slightly slower, but it is more obvious that it is correct, and that is what really matters, unless you have tried it and found that it really is too slow. If you're only dealing with a few hundred thousand elements or less, the time difference is unlikely to be noticeable, especially if you're also reading the elements from a file, which is likely to be much slower than this stage.
#include <iostream>
#include <vector>
#include <limits>
#include <boost/dynamic_bitset.hpp>
#include <algorithm>
#include <stdlib.h>
int conditionalMin(const std::vector<int>& numbers, int lowerBound)
{
int result = std::numeric_limits<int>::max();
for (int x : numbers) {
if (x < result && x > lowerBound) {
result = x;
}
}
return result;
}
bool isArithmeticSet(const std::vector<int>& numbers)
{
// Look for smallest, second smallest, and largest numbers
int x0 = *std::min_element(numbers.begin(), numbers.end());
int x1 = conditionalMin(numbers, x0);
int xn = *std::max_element(numbers.begin(), numbers.end());
// Find which elements are present (exit early if any inappropriate elements)
int sequenceCount = (xn - x0) / (x1 - x0) + 1;
boost::dynamic_bitset<> present(sequenceCount);
for (int x : numbers) {
div_t divResult = div(x - x0, x1 - x0);
if (divResult.rem != 0 || divResult.quot < 0 || divResult.quot > sequenceCount) {
return false;
}
present[divResult.quot] = true;
}
// Are all the required elements present?
return present.all();
}
int main() {
std::vector<int> numbers = {180, 30, 80, 280, 130, 330, 230, 30, 30, 330, 80 };
std::cout << "is an arithmetic set: " << isArithmeticSet(numbers) << "\n";
return 0;
}
I am trying to find the minimal distance in the Manhattan metric between a point of set A and a point of set B. I have been searching for information about this, but I haven't found anything.
#include<bits/stdc++.h>
using namespace std;
#define st first
#define nd second
pair<int, int> pointsA[1000001];
pair<int, int> pointsB[1000001];
int main() {
int n, t;
unsigned long long dist;
scanf("%d", &t);
while(t-->0) {
dist = 4000000000LL;
scanf("%d", &n);
for(int i = 0; i < n; i++) {
scanf("%d%d", &pointsA[i].st, &pointsA[i].nd);
}
for(int i = 0; i < n; i++) {
scanf("%d%d", &pointsB[i].st, &pointsB[i].nd);
}
for(int i = 0; i < n ;i++) {
for(int j = 0; j < n ; j++) {
if(abs(pointsA[i].st - pointsB[j].st) + abs(pointsA[i].nd - pointsB[j].nd) < dist) {
dist = abs(pointsA[i].st - pointsB[j].st) + abs(pointsA[i].nd - pointsB[j].nd);
}
}
}
printf("%llu\n", dist);
}
}
My code works in O(n^2), but it is too slow. I do not know whether it will be useful, but y in pointsA is always > 0 and y in pointsB is always < 0. My code compares each new pair's distance with the best so far and keeps the smallest.
for example:
input:
2
3
-2 2
1 3
3 1
0 -1
-1 -2
1 -2
1
1 1
-1 -1
Output:
5
4
My solution (note for simplicity I do not care about overflow in manhattan_dist and for that reason it does not work with unsigned long long):
#include <cstdlib>
#include <cstdio>
#include <cassert>
#include <vector>
#include <limits>
#include <algorithm>
typedef std::pair<int, int> Point;
typedef std::vector<std::pair<int, int> > PointsList;
static inline bool cmp_by_x(const Point &a, const Point &b)
{
if (a.first < b.first) {
return true;
} else if (a.first > b.first) {
return false;
} else {
return a.second < b.second;
}
}
static inline bool cmp_by_y(const Point &a, const Point &b)
{
if (a.second < b.second) {
return true;
} else if (a.second > b.second) {
return false;
} else {
return a.first < b.first;
}
}
static inline unsigned manhattan_dist(const Point &a, const Point &b)
{
return std::abs(a.first - b.first) +
std::abs(a.second - b.second);
}
int main()
{
unsigned int n_iter = 0;
if (scanf("%u", &n_iter) != 1) {
std::abort();
}
for (unsigned i = 0; i < n_iter; ++i) {
unsigned int N = 0;
if (scanf("%u", &N) != 1) {
std::abort();
}
if (N == 0) {
continue;
}
PointsList pointsA(N);
for (PointsList::iterator it = pointsA.begin(), endi = pointsA.end(); it != endi; ++it) {
if (scanf("%d%d", &it->first, &it->second) != 2) {
std::abort();
}
assert(it->second > 0);
}
PointsList pointsB(N);
for (PointsList::iterator it = pointsB.begin(), endi = pointsB.end(); it != endi; ++it) {
if (scanf("%d%d", &it->first, &it->second) != 2) {
std::abort();
}
assert(it->second < 0);
}
std::sort(pointsA.begin(), pointsA.end(), cmp_by_y);
std::sort(pointsB.begin(), pointsB.end(), cmp_by_y);
const PointsList::const_iterator min_a_by_y = pointsA.begin();
const PointsList::const_iterator max_b_by_y = (pointsB.rbegin() + 1).base();
assert(*max_b_by_y == pointsB.back());
unsigned dist = manhattan_dist(*min_a_by_y, *max_b_by_y);
const unsigned diff_x = std::abs(min_a_by_y->first - max_b_by_y->first);
const unsigned best_diff_y = dist - diff_x;
const int max_y_for_a = max_b_by_y->second + dist;
const int min_y_for_b = min_a_by_y->second - dist;
PointsList::iterator it;
for (it = pointsA.begin() + 1; it != pointsA.end() && it->second <= max_y_for_a; ++it) {
}
if (it != pointsA.end()) {
pointsA.erase(it, pointsA.end());
}
PointsList::reverse_iterator rit;
for (rit = pointsB.rbegin() + 1; rit != pointsB.rend() && rit->second >= min_y_for_b; ++rit) {
}
if (rit != pointsB.rend()) {
pointsB.erase(pointsB.begin(), (rit + 1).base());
}
std::sort(pointsA.begin(), pointsA.end(), cmp_by_x);
std::sort(pointsB.begin(), pointsB.end(), cmp_by_x);
for (size_t j = 0; diff_x > 0 && j < pointsA.size(); ++j) {
const Point &cur_a_point = pointsA[j];
assert(max_y_for_a >= cur_a_point.second);
const int diff_x = dist - best_diff_y;
const int min_x = cur_a_point.first - diff_x + 1;
const int max_x = cur_a_point.first + diff_x - 1;
const Point search_term = std::make_pair(max_x, std::numeric_limits<int>::min());
PointsList::const_iterator may_be_near_it = std::lower_bound(pointsB.begin(), pointsB.end(), search_term, cmp_by_x);
for (PointsList::const_reverse_iterator rit(may_be_near_it); rit != pointsB.rend() && rit->first >= min_x; ++rit) {
const unsigned cur_dist = manhattan_dist(cur_a_point, *rit);
if (cur_dist < dist) {
dist = cur_dist;
}
}
}
printf("%u\n", dist);
}
}
Benchmark on my machine (Linux + i7 2.70 GHz + gcc -Ofast -march=native):
$ make bench
time ./test1 < data.txt > test1_res
real 0m7.846s
user 0m7.820s
sys 0m0.000s
time ./test2 < data.txt > test2_res
real 0m0.605s
user 0m0.590s
sys 0m0.010s
test1 is your variant, and test2 is mine.
You'll need to learn how to write functions and how to use containers. With your current coding style, it's infeasible to get a better solution.
The problem is that the better solution is a recursive method. Sort the points by X coordinate, then recursively split the set in half and determine the closest distance within each half, as well as the closest distance between a pair of points taken from different halves.
The last part is efficient because both halves are sorted by X: comparing the last values of the left half with the first values of the right half gives a good upper bound on the distance.
So there's a really simple optimization you can make that can cut a ton of time off.
Since you state that all points in set A have y > 0 and all points in set B have y < 0, you can immediately discard all points in A whose y > mindist and all points in B whose y < -mindist, where mindist is the smallest distance found so far. These points can never form a pair closer than the current closest pair:
for(int i = 0; i < n ;i++) {
if (pointsA[i].nd > dist)
continue; // <- this is the big one
for(int j = 0; j < n ; j++) {
if (pointsB[j].nd < -dist)
continue; // <- helps although not as much
if(abs(pointsA[i].st - pointsB[j].st) + abs(pointsA[i].nd - pointsB[j].nd) < dist) {
dist = abs(pointsA[i].st - pointsB[j].st) + abs(pointsA[i].nd - pointsB[j].nd);
}
}
}
printf("%llu\n", dist);
For a test of 40000 points per set, on my machine with gcc and -O2 this reduces the time from 8.2 seconds down to roughly 0.01 seconds (and yields correct results)! (measured with QueryPerformanceCounter on Windows).
Not too shabby.
Fwiw, computing your distance twice isn't actually that big of a deal. First of all that "second" calculation doesn't actually happen all that often, it only happens when a closer distance is found.
And secondly, for reasons I can't explain, storing it in a variable and only calculating it once actually consistently seems to add about 20% to the total run time, raising it from an average of 8.2 sec to about 10.5 seconds for the above set.
I'd say discarding points based on your assumptions about the Y values is by far the biggest bang for your buck you can get without significantly changing your algorithm.
You may be able to take advantage of that further by pre-sorting A in order of increasing Y and B in order of decreasing Y before finding the distances; with sorted input, once a point fails the check every later point fails it too, so the continue can even become a break.
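A sketch of how that could look (my own untested illustration, assuming the question's input format: t test cases, each with n, then n points of A, then n points of B):
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t-- > 0) {
        int n;
        scanf("%d", &n);
        vector<pair<long long, long long>> A(n), B(n);
        for (auto& p : A) scanf("%lld%lld", &p.first, &p.second);   // set A: y > 0
        for (auto& p : B) scanf("%lld%lld", &p.first, &p.second);   // set B: y < 0
        sort(A.begin(), A.end(),                                    // A by increasing y
             [](const auto& a, const auto& b) { return a.second < b.second; });
        sort(B.begin(), B.end(),                                    // B by decreasing y
             [](const auto& a, const auto& b) { return a.second > b.second; });
        long long dist = LLONG_MAX;
        for (const auto& a : A) {
            if (a.second > dist) break;        // every later A point lies even farther above the axis
            for (const auto& b : B) {
                if (b.second < -dist) break;   // every later B point lies even farther below the axis
                long long d = llabs(a.first - b.first) + llabs(a.second - b.second);
                dist = min(dist, d);
            }
        }
        printf("%lld\n", dist);
    }
}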
Keep a list of candidates in group A and group B, initially containing the whole input. Take the minimum y of A and the maximum y of B (the closest pair in y), calculate their Manhattan distance, and eliminate from the candidate lists any point whose |y| exceeds that upper bound. This might slash the input or it might have essentially no effect, but it's O(N) and a cheap first step.
Now sort the remaining candidates in x and y. This gives you a separated list in y, and a mixed list in x, and is O(N log N), where N has been cut down, hopefully but not necessarily, by step one.
For each point, now calculate its closest neighbour in y (trivial) and closest in x (a bit harder), then calculate its minimum possible Manhattan distance, assuming the closest in x is also the closest in y. Eliminate any points further than your bound from the candidate list. Now sort again, by minimum possible. That's another N log N operation.
Now start with your best candidate and find its genuine minimum distance, by trying the closest points in either direction in x or y, and terminating when either delta x or delta y goes above the best so far, or above your maximum bound. If you find a better candidate pair than the current one, purge the candidate list of everything with a worse minimum possible distance. If the best candidate point doesn't form half of a candidate pair, just purge that one point.
When you've purged a certain number of candidates, recalculate the lists. I'm not sure what the best value to use would be, certainly if you get to the worst candidate you must do that and then start again at the best. Maybe use 50%.
Eventually you are left with only one candidate pair. I'm not quite sure what the analysis is - worst case I suppose you only eliminate a few candidates on each test. But for most inputs you should get the candidate list down to a small value pretty fast.
I'm trying to solve a task with the constraints 1 ≤ B, L, S ≤ 100 000. For this I run a BFS from every stone in the bottom row of the grid and continue until we reach y = 0. However, when running the code I get a timeout error. Why do I get a TLE error, and what do I change in this code to pass?
#include <iostream>
#include <algorithm>
#include <vector>
#include <queue>
using namespace std;
int bfs(const vector< vector<int> > &g, pair<int, int> p)
{
queue <pair<pair<int, int>, int> > que;
vector< vector<bool> > vis(100000,vector<bool>(100000,false)); //visited
int x, y, k = 0; //k = distance
pair <pair<int, int>, int> next, start;
pair <int, int> pos;
start = make_pair(make_pair(p.first, p.second), 0);
que.push(start);
while(!que.empty())
{
next = que.front();
pos = next.first;
x = pos.first;
y = pos.second;
k = next.second;
que.pop();
if (y == 0) {
return k;
}
if((g[x+1][y] == 1) && (vis[x+1][y] == false))
{
que.push(make_pair(make_pair(x+1, y), k+1));
vis[x+1][y] = true;
}
if((g[x][y+1] == 1) && (vis[x][y+1] == false))
{
que.push(make_pair(make_pair(x, y+1), k+1));
vis[x][y+1] = true;
}
if((g[x-1][y] == 1) && (vis[x-1][y] == false))
{
que.push(make_pair(make_pair(x-1, y), k+1));
vis[x-1][y] = true;
}
if((g[x][y-1] == 1) && (vis[x][y-1] == false))
{
que.push(make_pair(make_pair(x, y-1), k+1));
vis[x][y-1] = true;
}
}
return 1234567; // queue exhausted without reaching y == 0 (same sentinel value as in main)
}
int main()
{
int B,L,S,x,y, shortestDist = 1234567;
cin >> B >> L >> S;
vector< pair <int, int> > p; //stones in the first row
vector< vector<int> > g(B, vector<int>(L,0));
for(int i = 0; i < S; i++)
{
cin >> y >> x;
g[y][x] = 1; // stone = 1, empty = 0
if(y == B-1)
p.push_back(make_pair(x, y));
}
for(int i=0;i<p.size();++i)
{
shortestDist = min(shortestDist,bfs(g,p[i]));
}
cout << shortestDist + 2 << "\n"; //add 2 because we need to jump from shore to river at start, and stone to river at end
return 0;
}
There are two problems with your approach, resulting in a complexity of O(B*(B*L+S)).
The first problem is that you run bfs up to B times in the worst case, when the whole first row is full of stones. You have S stones and every stone has at most 4 neighbours, so each call of bfs runs in O(S); but you do it B times, so for some cases your algorithm needs about O(B*S) operations. I'm sure the author of the problem made sure that programs with this running time would time out (after all, that is at least 10^10 operations).
A possible solution for this problem is to start bfs with all the stones of the first row already in the queue. Having multiple starting points can also be achieved by adding a new vertex to the graph and connecting it to the stones in the first row; that second approach is not as easy with your implementation because of the data structures you are using (see the sketch at the end of this answer).
And this (the data structure) is your second problem: you have S = 10^5 elements/vertices/stones but use B*L = 10^10 memory units for them. That is around 2 GB of memory! I don't know what the memory limit for this problem is, but it is just too much. Initializing it B times also costs you B*B*L operations overall.
A better way is to use a sparse data structure, such as an adjacency list or a hash set keyed by coordinates. But beware of filling this data structure in O(S^2): use a set for O(S log S), or an unordered_set for expected O(S) running time.
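A minimal sketch of how both suggestions could be combined (my own illustration, assuming the input format from the question's main: B, L, S, then S lines of "y x"): keep the stones in a hash map keyed by coordinate and run a single multi-source BFS starting from every stone of the bottom row.
#include <bits/stdc++.h>
using namespace std;

// Multi-source BFS over the stones only: O(S) memory and O(S) expected time,
// instead of a dense B x L grid re-initialized for every start stone.
int main() {
    int B, L, S;
    cin >> B >> L >> S;

    auto key = [](int y, int x) { return (long long)y * 2000003LL + x; }; // pack (y, x) into one key
    unordered_map<long long, int> dist;       // stone -> BFS distance; -1 = stone exists but unvisited
    vector<pair<int,int>> stones(S);
    for (auto& s : stones) {
        cin >> s.first >> s.second;           // y x, as read in the question's main
        dist[key(s.first, s.second)] = -1;
    }

    queue<pair<int,int>> q;
    for (auto& s : stones)
        if (s.first == B - 1) {               // every stone of the bottom row is a starting point
            dist[key(s.first, s.second)] = 0;
            q.push(s);
        }

    int best = INT_MAX;
    while (!q.empty()) {
        auto [y, x] = q.front(); q.pop();
        int d = dist[key(y, x)];
        if (y == 0) { best = min(best, d); continue; }      // reached the top row
        const int dy[4] = {1, -1, 0, 0}, dx[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            auto it = dist.find(key(y + dy[k], x + dx[k]));
            if (it != dist.end() && it->second == -1) {     // neighbouring stone, not visited yet
                it->second = d + 1;
                q.push({y + dy[k], x + dx[k]});
            }
        }
    }
    if (best != INT_MAX)
        cout << best + 2 << "\n";   // +2 for the jumps from the shore and onto the far bank, as in the question
}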