3-opt optimization code for TSP - C++

I have this code (TSP problem) that works for the 2-opt optimization, and I'd like to change it to a 3-opt optimization. I know I must add a for loop, but I don't really understand the range of the third loop. Can you help me?
double bestDecrement = tsp.infinite;
// initial and final positions are fixed (the initial/final node remains 0)
for ( uint a = 1 ; a < currSol.sequence.size() - 2 ; a++ )
{
int h = currSol.sequence[a-1];
int i = currSol.sequence[a];
for ( uint b = a + 1 ; b < currSol.sequence.size() - 1 ; b++ )
{
int j = currSol.sequence[b];
int l = currSol.sequence[b+1];
double neighDecrement = - tsp.cost[h][i] - tsp.cost[j][l] + tsp.cost[h][j] + tsp.cost[i][l] ;
if ( neighDecrement < bestDecrement )
{
bestDecrement = neighDecrement;
move.from = a;
move.to = b;
}
}
}

Basically you are looking for 3 edges to remove and then reinsert. So for example:
for ( uint a = 1 ; a < currSol.sequence.size() - 3 ; a++ )
...
for ( uint b = a + 1 ; b < currSol.sequence.size() - 2 ; b++ )
...
for ( uint c = b + 1 ; c < currSol.sequence.size() - 1 ; c++)
...
The trickier part is determining the new costs, since there are a few feasible reinsertions (as opposed to just one in 2-opt).
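To make the cost side concrete, here is one possible sketch (the function and variable names are mine, not from the original code): with the three removed edges (A,B), (C,D), (E,F) taken in tour order, so the tour looks like ... A-B ... C-D ... E-F ... with segments S1 = B..C and S2 = D..E, the four reconnections that replace all three edges have these costs.

```cpp
#include <algorithm>
#include <vector>

// Sketch: evaluate the four "pure" 3-opt reconnections; the remaining
// cases reduce to plain 2-opt moves and are usually handled separately.
double best3optDelta(const std::vector<std::vector<double>>& cost,
                     int A, int B, int C, int D, int E, int F)
{
    double removed = cost[A][B] + cost[C][D] + cost[E][F];
    double candidates[] = {
        cost[A][C] + cost[B][E] + cost[D][F] - removed, // reverse S1 and S2
        cost[A][D] + cost[E][B] + cost[C][F] - removed, // swap S1 and S2
        cost[A][D] + cost[E][C] + cost[B][F] - removed, // swap, reverse S1
        cost[A][E] + cost[D][B] + cost[C][F] - removed, // swap, reverse S2
    };
    // a negative result is an improvement, matching the bestDecrement convention
    return *std::min_element(std::begin(candidates), std::end(candidates));
}
```

Applying the winning reconnection also means reversing or reordering the affected subsequences, which is why a 3-opt move needs more bookkeeping than a single from/to pair.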

Algorithm that finds n-th number coprime with a given one (inclusion-exclusion principle)

I've bumped into a problem that asks me to find the n-th number that is coprime with a given one (p).
The restrictions are:
1 ≤ n ≤ 12 000 000 000
1 ≤ p ≤ 10^14
Time limit: 0.05 seconds
Now, the expected solution makes use of the inclusion-exclusion principle. Others who have solved this problem use a function that I can't figure out, but I know it's supposed to be the one that applies this principle. If someone can explain the function, that'd be great.
Here's the full source from someone who has solved this problem:
#include <cmath>
#include <vector>
#include <bitset>
#include <climits>
#include <fstream>
#include <iostream>
using namespace std;
const int NMAX = 11e4 ;
bitset < NMAX + 5 > c ;
vector < int > primes ;
inline void sieve ( long long n ) {
long long i, j, lim = ( long long ) sqrt ( ( double ) n ) ;
c [ 0 ] = c [ 1 ] = 1 ;
for ( i = 4 ; i <= n ; i += 2 )
c [ i ] = 1 ;
for ( i = 3 ; i <= lim ; i += 2 )
if ( ! c [ i ] )
for ( j = i * i ; j <= n ; j += i * 2 )
c [ j ] = 1 ;
primes.reserve ( 1e4 ) ;
primes.push_back ( 2 ) ;
for ( i = 3 ; i <= n ; i += 2 )
if ( ! c [ i ] )
primes.push_back ( i ) ;
}
long long factors [ 25 ], nr, med ;
inline void gen_fact ( long long x ) {
long long d = 0, ok ;
nr = 0 ;
while ( d < ( long long ) primes.size () && primes [ d ] * primes [ d ] <= x ) {
ok = 0 ;
while ( x % primes [ d ] == 0 )
x /= primes [ d ], ok = 1 ;
if ( ok )
factors [ ++ nr ] = primes [ d ] ;
++ d ;
}
if (x != 1)
factors [++nr] = x;
}
inline long long numbers(long long x) {
long long p, sign, i, j, cnt, ans = x ;
for ( i = 1 ; i < ( 1 << nr ) ; ++ i ) {
sign = 1 ;
p = 1 ;
cnt = 0 ;
for ( j = 0 ; j < nr ; ++ j )
if ( ( 1 << j ) & i ) {
p *= factors [ j + 1 ] ;
++ cnt ;
}
if ( cnt % 2 )
sign = - 1 ;
ans += sign * x / p ;
}
return ans ;
}
int main() {
ifstream in ( "frac.in" ) ;
ofstream out ( "frac.out" ) ;
long long n, p, st, dr, last, val ;
sieve( NMAX ) ;
in >> n >> p ;
gen_fact ( n ) ;
st = 0 ;
dr = LLONG_MAX ;
while ( st <= dr ) {
med = st + ( ( dr - st ) >> 1 ) ;
val = numbers ( med ) ;
if ( val >= p )
dr = med - 1, last = med ;
else
st = med + 1 ;
}
out << last ;
}
The function in question is numbers().
As for the other functions, I understand them. The only explanation I need is how this function makes use of the inclusion-exclusion principle.
In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as
|A ∪ B| = |A| + |B| - |A ∩ B|
The full identity: |A1 ∪ ... ∪ An| = Σ|Ai| - Σ|Ai ∩ Aj| + Σ|Ai ∩ Aj ∩ Ak| - ... + (-1)^(n+1) |A1 ∩ ... ∩ An|, where the sums run over all index combinations i < j < k < ...
As for what I recognize in the function that uses the principle, the line ans += sign * x / p is the only thing I recognize.
What I understand from this principle is that one way to count the elements of a union of sets is to add up the sizes of all the sets, then subtract the elements that any 2 of the given sets have in common, then add back the elements that any 3 sets have in common, and so on. This is because the first time we added everything up, we also counted elements that appear in more than one set, and in a union we only want each counted once.
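That reading matches what numbers() computes, and it can be sketched standalone (coprimeCount is an illustrative name, not from the original source): given the distinct prime factors of the fixed number, it counts the integers in [1, x] coprime to it. Each non-empty subset of factors contributes the multiples of its product, with the sign given by the subset's parity.

```cpp
#include <vector>

// Standalone sketch of numbers(): count integers in [1, x] coprime to the
// number whose distinct prime factors are given.  Each bitmask selects a
// non-empty subset of factors; x / p counts multiples of their product p,
// subtracted for odd-sized subsets and added back for even-sized ones.
long long coprimeCount(long long x, const std::vector<long long>& factors)
{
    int nr = (int)factors.size();
    long long ans = x;
    for (int mask = 1; mask < (1 << nr); ++mask) {
        long long p = 1;
        int bits = 0;
        for (int j = 0; j < nr; ++j)
            if (mask & (1 << j)) { p *= factors[j]; ++bits; }
        ans += (bits % 2 ? -1LL : 1LL) * (x / p);
    }
    return ans;
}
```

For x = 10 and factors {2, 3}: 10 - 10/2 - 10/3 + 10/6 = 3, counting 1, 5 and 7. The binary search in main() then finds the smallest med whose count reaches the requested rank.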

Am I interpreting this pseudocode wrong?

I've got this pseudocode:
COMPARE-EXCHANGE(A,i,j)
if A[i] > A[j]
exchange A[i] with A[j]
INSERTION-SORT(A)
for j = 2 to A.length
for i = j-1 downto 1
COMPARE-EXCHANGE(A,i,i+1)
I would interpret it as:
void insertSort( )
{
int tmp;
for( int j = 2 ; j < MAX ; ++j )
{
for( int i = j - 1 ; i > 0 ; --i )
{
if( unsortedArr[i] > unsortedArr[i + 1] )
{
tmp = unsortedArr[i];
unsortedArr[i] = unsortedArr[i + 1];
unsortedArr[i + 1] = tmp;
}
}
}
}
However that would skip unsortedArr[0].
Which means it won't work.
Changing the 2nd for to:
for( int i = j - 1 ; i >= 0 ; --i )
Will make it run as intended.
Is there a mistake in the pseudocode or was my first try of interpreting it wrong?
However that would skip unsortedArr[0]. Which means it won't work.
It is nearly universal for pseudocode to number array elements from 1, not from 0 as in C/C++.
Changing the 2nd for to:
for( int i = j - 1 ; i >= 0 ; --i )
Will make it run as intended.
That is not enough: you also need to start j at 1 rather than 2 in the outer loop.
Also note that the C++ standard library offers a std::swap function which takes care of exchanging the elements of the array for you:
if( unsortedArr[i] > unsortedArr[i + 1] )
{
std::swap(unsortedArr[i], unsortedArr[i+1]);
}
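Putting both index fixes together (outer loop starting at 1, inner loop running down to 0), a faithful 0-based translation might look like this sketch:

```cpp
#include <cstddef>
#include <utility>

// 0-based translation of INSERTION-SORT: the pseudocode's "j = 2 to length"
// becomes j = 1..n-1 and "i = j-1 downto 1" becomes i = j-1..0, so the
// pair at the front of the array is no longer skipped.
void insertSort(int arr[], std::size_t n)
{
    for (std::size_t j = 1; j < n; ++j)
        for (std::size_t i = j; i-- > 0; )   // i runs j-1, j-2, ..., 0
            if (arr[i] > arr[i + 1])
                std::swap(arr[i], arr[i + 1]);
}
```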
I think your pseudo code is assuming that arrays start with index one [1] -- where in C & C++ they start at zero [0].
I'm guessing that the pseudocode is using 1-based indexing, rather than the 0-based indexing that C++ uses.

Most efficient way to calculate lexicographic index

Can anybody find any potentially more efficient algorithms for accomplishing the following task?:
For any given permutation of the integers 0 thru 7, return the index which describes the permutation lexicographically (indexed from 0, not 1).
For example,
The array 0 1 2 3 4 5 6 7 should return an index of 0.
The array 0 1 2 3 4 5 7 6 should return an index of 1.
The array 0 1 2 3 4 6 5 7 should return an index of 2.
The array 1 0 2 3 4 5 6 7 should return an index of 5040 (that's 7!, or factorial(7), since all 5040 permutations starting with 0 come first).
The array 7 6 5 4 3 2 1 0 should return an index of 40319 (that's 8!-1). This is the maximum possible return value.
My current code looks like this:
int lexic_ix(int* A){
int value = 0;
for(int i=0 ; i<7 ; i++){
int x = A[i];
for(int j=0 ; j<i ; j++)
if(A[j]<A[i]) x--;
value += x*factorial(7-i); // actual unrolled version doesn't have a function call
}
return value;
}
I'm wondering if there's any way I can reduce the number of operations by removing that inner loop, or if I can reduce conditional branching in any way (other than unrolling - my current code is actually an unrolled version of the above), or if there are any clever bitwise hacks or filthy C tricks to help.
I already tried replacing
if(A[j]<A[i]) x--;
with
x -= (A[j]<A[i]);
and I also tried
x = A[j]<A[i] ? x-1 : x;
Both replacements actually led to worse performance.
And before anyone says it - YES this is a huge performance bottleneck: currently about 61% of the program's runtime is spent in this function, and NO, I don't want to have a table of precomputed values.
Aside from those, any suggestions are welcome.
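One bitwise direction along the lines the question asks about (a sketch of mine, not from the original post; it relies on the GCC/Clang __builtin_popcount intrinsic, and whether it actually beats the unrolled loop would need profiling): since the values are exactly 0..7, a mask of already-seen values lets a popcount replace the inner loop.

```cpp
// The number of earlier elements smaller than A[i] equals the popcount of
// the "seen" mask restricted to bits below A[i], so the inner loop
// collapses to one AND plus one popcount per element.
int lexic_ix_popcount(const int* A)
{
    static const int fact[7] = {5040, 720, 120, 24, 6, 2, 1}; // 7! down to 1!
    int value = 0;
    unsigned seen = 0;
    for (int i = 0; i < 7; ++i) {
        int smaller = __builtin_popcount(seen & ((1u << A[i]) - 1u));
        value += (A[i] - smaller) * fact[i];
        seen |= 1u << A[i];
    }
    return value;
}
```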
Don't know if this helps, but here's another solution:
int lexic_ix(int* A, int n){ //n = last index = number of digits - 1
int value = 0;
int x = 0;
for(int i=0 ; i<n ; i++){
int diff = (A[i] - x); //pb1
if(diff > 0)
{
for(int j=0 ; j<i ; j++)//pb2
{
if(A[j]<A[i] && A[j] > x)
{
if(A[j]==x+1)
{
x++;
}
diff--;
}
}
value += diff;
}
else
{
x++;
}
value *= n - i;
}
return value;
}
I couldn't get rid of the inner loop, so complexity is O(n²) in the worst case, but O(n) in the best case, versus your solution which is O(n²) in all cases.
Alternatively, you can replace the inner loop by the following to remove some worst cases at the expense of another verification in the inner loop :
int j=0;
while(diff>1 && j<i)
{
if(A[j]<A[i])
{
if(A[j]==x+1)
{
x++;
}
diff--;
}
j++;
}
Explanation :
(or rather "how I ended up with that code"; I think it is not that different from yours, but it might give you ideas)
(for less confusion I used characters instead of digits, and only four of them)
abcd 0 = ((0 * 3 + 0) * 2 + 0) * 1 + 0
abdc 1 = ((0 * 3 + 0) * 2 + 1) * 1 + 0
acbd 2 = ((0 * 3 + 1) * 2 + 0) * 1 + 0
acdb 3 = ((0 * 3 + 1) * 2 + 1) * 1 + 0
adbc 4 = ((0 * 3 + 2) * 2 + 0) * 1 + 0
adcb 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 //pb1
bacd 6 = ((1 * 3 + 0) * 2 + 0) * 1 + 0
badc 7 = ((1 * 3 + 0) * 2 + 1) * 1 + 0
bcad 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 //First reflexion
bcda 9 = ((1 * 3 + 1) * 2 + 1) * 1 + 0
bdac 10 = ((1 * 3 + 2) * 2 + 0) * 1 + 0
bdca 11 = ((1 * 3 + 2) * 2 + 1) * 1 + 0
cabd 12 = ((2 * 3 + 0) * 2 + 0) * 1 + 0
cadb 13 = ((2 * 3 + 0) * 2 + 1) * 1 + 0
cbad 14 = ((2 * 3 + 1) * 2 + 0) * 1 + 0
cbda 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0 //pb2
cdab 16 = ((2 * 3 + 2) * 2 + 0) * 1 + 0
cdba 17 = ((2 * 3 + 2) * 2 + 1) * 1 + 0
[...]
dcba 23 = ((3 * 3 + 2) * 2 + 1) * 1 + 0
First "reflexion" :
An entropy point of view: abcd has the least "entropy". If a character is in a place it "shouldn't" be, it creates entropy, and the earlier the entropy occurs, the greater it becomes.
For bcad for example, lexicographic index is 8 = ((1 * 3 + 1) * 2 + 0) * 1 + 0 and can be calculated that way :
value = 0;
value += max(b - a, 0); // = 1; (a "should be" in the first place [to create the less possible entropy] but instead it is b)
value *= 3 - 0; //last index - current index
value += max(c - b, 0); // = 1; (b "should be" in the second place but instead it is c)
value *= 3 - 1;
value += max(a - c, 0); // = 0; (a "should have been" put earlier, so it does not create entropy to put it there)
value *= 3 - 2;
value += max(d - d, 0); // = 0;
Note that the last operation will always do nothing, which is why i stops before the last index.
First problem (pb1) :
For adcb, for example, the first logic doesn't work (it leads to a lexicographic index of ((0 * 3 + 2) * 2 + 0) * 1 = 4) because c - d = 0, yet it still creates entropy to put c before b. I added x because of that; it represents the first digit/character that isn't placed yet. With x, diff cannot be negative.
For adcb, lexicographic index is 5 = ((0 * 3 + 2) * 2 + 1) * 1 + 0 and can be calculated that way :
value = 0; x=0;
diff = a - a; // = 0; (a is in the right place)
diff == 0 => x++; //x=b now and we don't modify value
value *= 3 - 0; //last index - current index
diff = d - b; // = 2; (b "should be" there (it's x) but instead it is d)
diff > 0 => value += diff; //we add diff to value and we don't modify x
diff = c - b; // = 1; (b "should be" there but instead it is c) This is where it differs from the first reflexion
diff > 0 => value += diff;
value *= 3 - 2;
Second problem (pb2) :
For cbda, for example, the lexicographic index is 15 = ((2 * 3 + 1) * 2 + 1) * 1 + 0, but the first reflexion gives ((2 * 3 + 0) * 2 + 1) * 1 + 0 = 13, and the solution to pb1 gives ((2 * 3 + 1) * 2 + 3) * 1 + 0 = 17. The solution to pb1 doesn't work because the two last characters to place are d and a, so d - a "means" 1 instead of 3. I had to count the already-placed characters that come before the character in this position but after x, so I had to add an inner loop.
Putting it all together :
I then realised that pb1 was just a particular case of pb2, and that if you remove x, and you simply take diff = A[i], we end up with the unnested version of your solution (with factorial calculated little by little, and my diff corresponding to your x).
So, basically, my "contribution" (I think) is to add a variable, x, which can avoid doing the inner loop when diff equals 0 or 1, at the expense of checking if you have to increment x and doing it if so.
I also check whether x must be incremented in the inner loop (if(A[j]==x+1)) because, taking badce for example, x will be b at the end since a comes after b, and you would enter the inner loop one more time on encountering c. With the check in the inner loop, you still have to run it when you encounter d, but x gets updated to c, and when you then encounter c you will not enter the inner loop. You can remove this check without breaking the program.
With the alternative version and the check in the inner loop it makes 4 different versions. The alternative one with the check is the one in which you enter the less the inner loop, so in terms of "theoretical complexity" it is the best, but in terms of performance/number of operations, I don't know.
Hope all of this helps (since the question is rather old, and I didn't read all the answers in details). If not, I still had fun doing it. Sorry for the long post. Also I'm new on Stack Overflow (as a member), and not a native speaker, so please be nice, and don't hesitate to let me know if I did something wrong.
Linear traversal of memory already in cache really doesn't take much time at all. Don't worry about it. You won't be traversing enough distance before factorial() overflows.
Move the 8 out as a parameter.
int factorial ( int input )
{
return input ? input * factorial (input - 1) : 1;
}
int lexic_ix ( int* arr, int N )
{
int output = 0;
int fact = factorial (N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
output += order * (fact /= N - i);
}
return output;
}
int main()
{
int arr [ ] = { 11, 10, 9, 8, 7 , 6 , 5 , 4 , 3 , 2 , 1 , 0 };
const int length = 12;
for ( int i = 0; i < length; ++i )
std::cout << lexic_ix ( arr + i, length - i ) << std::endl;
}
Say, for an M-digit permutation, from your code you can get the lexicographic SN formula, which is: A(m-1)*(m-1)! + A(m-2)*(m-2)! + ... + A(0)*0!, where A(j) ranges from 0 to j. You can calculate SN starting from A(0)*0!, then A(1)*1!, ..., up to A(m-1)*(m-1)!, and add these together (assuming your integer type does not overflow), so you do not need to calculate factorials recursively and repeatedly. The SN ranges from 0 to M!-1, because Sum(k*k!, k = 1..n) = (n+1)! - 1.
If you are not calculating factorials recursively, I cannot think of anything that could make any big improvement.
Sorry for posting the code a little bit late. I just did some research and found this:
http://swortham.blogspot.com.au/2011/10/how-much-faster-is-multiplication-than.html
According to this author, integer multiplication can be 40 times faster than integer division. The gap is not as dramatic for floating-point numbers, but here it is pure integer arithmetic.
int lexic_ix ( int arr[], int N )
{
// if this function will be called repeatedly, consider pass in this pointer as parameter
std::unique_ptr<int[]> coeff_arr = std::make_unique<int[]>(N);
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order; // save this into coeff_arr for later multiplication
}
//
// There are 2 points about the following code:
// 1). most modern processors have built-in multiplier, \
// and multiplication is much faster than division
// 2). In your code, you only test the maximum permutation's serial number;
//     if you put in a random sequence, say, {3, 7, 2, 9, 0, 1, 5, 8, 4, 6}
//     when the length is 10, and you look into
// the coeff_arr[] in debugger, you can see that coeff_arr[] is:
// {3, 6, 2, 6, 0, 0, 1, 2, 0, 0}, the last number will always be zero anyway.
// so, you will have good chance to reduce many multiplications.
// I did not do any performance profiling, you could have a go, and it will be
// much appreciated if you could give some feedback about the result.
//
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i) // start from 1, because coeff_arr[N-1] is always 0
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
int main()
{
int arr [ ] = { 3, 7, 2, 9, 0, 1, 5, 8, 4, 6 }; // try this and check coeff_arr
const int length = 10;
std::cout << lexic_ix(arr, length ) << std::endl;
return 0;
}
This is the whole profiling code. I only ran the test on Linux; the code was compiled with G++ 8.4 and the '-std=c++11 -O3' compiler options. To be fair, I slightly rewrote your code to pre-calculate N! and pass it into the function, but it seems this does not help much.
The performance profiling for N = 9 (362,880 permutations) is:
Time durations are: 34, 30, 25 milliseconds
Time durations are: 34, 30, 25 milliseconds
Time durations are: 33, 30, 25 milliseconds
The performance profiling for N=10 (3,628,800 permutations) is:
Time durations are: 345, 335, 275 milliseconds
Time durations are: 348, 334, 275 milliseconds
Time durations are: 345, 335, 275 milliseconds
The first number is your original function, the second is the function re-written that gets N! passed in, the last number is my result. The permutation generation function is very primitive and runs slowly, but as long as it generates all permutations as testing dataset, that is alright. By the way, these tests are run on a Quad-Core 3.1Ghz, 4GBytes desktop running Ubuntu 14.04.
EDIT: I forgot a factor: the first timed function may need to expand the lexi_numbers vector, so I added a warm-up pass before timing. After this, the times are 333, 334, 275.
EDIT: Another factor that could influence the performance, I am using long integer in my code, if I change those 2 'long' to 2 'int', the running time will become: 334, 333, 264.
#include <iostream>
#include <vector>
#include <chrono>
using namespace std::chrono;
int factorial(int input)
{
return input ? input * factorial(input - 1) : 1;
}
int lexic_ix(int* arr, int N)
{
int output = 0;
int fact = factorial(N);
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix1(int* arr, int N, int N_fac)
{
int output = 0;
int fact = N_fac;
for (int i = 0; i < N - 1; i++)
{
int order = arr[i];
for (int j = 0; j < i; j++)
order -= arr[j] < arr[i];
output += order * (fact /= N - i);
}
return output;
}
int lexic_ix2( int arr[], int N , int coeff_arr[])
{
for ( int i = 0; i < N - 1; i++ )
{
int order = arr [ i ];
for ( int j = 0; j < i; j++ )
order -= arr [ j ] < arr [ i ];
coeff_arr[i] = order;
}
long fac = 1;
long sn = 0;
for (int i = 1; i < N; ++i)
{
fac *= i;
if (coeff_arr[N - 1 - i])
sn += coeff_arr[N - 1 - i] * fac;
}
return sn;
}
std::vector<std::vector<int>> gen_permutation(const std::vector<int>& permu_base)
{
if (permu_base.size() == 1)
return std::vector<std::vector<int>>(1, std::vector<int>(1, permu_base[0]));
std::vector<std::vector<int>> results;
for (int i = 0; i < permu_base.size(); ++i)
{
int cur_int = permu_base[i];
std::vector<int> cur_subseq = permu_base;
cur_subseq.erase(cur_subseq.begin() + i);
std::vector<std::vector<int>> temp = gen_permutation(cur_subseq);
for (auto x : temp)
{
x.insert(x.begin(), cur_int);
results.push_back(x);
}
}
return results;
}
int main()
{
#define N 10
std::vector<int> arr;
int buff_arr[N];
const int length = N;
int N_fac = factorial(N);
for(int i=0; i<N; ++i)
arr.push_back(N-i-1); // for N=10, arr is {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
std::vector<std::vector<int>> all_permus = gen_permutation(arr);
std::vector<int> lexi_numbers;
// This call is not timed, only to expand the lexi_numbers vector
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
lexi_numbers.clear();
auto t0 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix(&x[0], length));
auto t1 = high_resolution_clock::now();
lexi_numbers.clear();
auto t2 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix1(&x[0], length, N_fac));
auto t3 = high_resolution_clock::now();
lexi_numbers.clear();
auto t4 = high_resolution_clock::now();
for (auto x : all_permus)
lexi_numbers.push_back(lexic_ix2(&x[0], length, buff_arr));
auto t5 = high_resolution_clock::now();
std::cout << std::endl << "Time durations are: " << duration_cast<milliseconds> \
(t1 -t0).count() << ", " << duration_cast<milliseconds>(t3 - t2).count() << ", " \
<< duration_cast<milliseconds>(t5 - t4).count() <<" milliseconds" << std::endl;
return 0;
}

Wrong answer on SPOJ FASTFLOW?

Can anyone help me out with this problem? I have been trying it for days and I get a wrong answer every time. I used the Edmonds-Karp method. Here is my code:
#include<cstdio>
#include<iostream>
#include<queue>
#include<algorithm>
#include<cstring>
#define MAXX 900000000
using namespace std;
long int capacity[5005][5005] ;
int graph[5005][5005] , v[5005] , from[5005] ;
//Calculating Max Flow using Edmond karp
int Max_Flow(int s , int t)
{ queue<int>Q ;
// Bfs to get the paths from source to sink
Q.push(s) ;
v[s] = 1 ;
int r ;
long long int min ;
while(!Q.empty())
{ int p = Q.front() ;
Q.pop();
r = 0 ;
for(int j = 0 ; graph[p][j]!=0 ; j++)
{
if(!v[graph[p][j]]&&capacity[p][graph[p][j]])
{ Q.push(graph[p][j]) ; from[graph[p][j]] = p ;
v[graph[p][j]] = 1 ;
if(graph[p][j]==t)
{ r = 1 ; break ; }
}
}
if(r==1)
break ;
}
r = t ;
min = MAXX ;
// Calculating the minimum capacity over the path found by BFS
while(from[r]!=0)
{
if(min>capacity[from[r]][r])
min = capacity[from[r]][r] ;
r = from[r] ;
}
r = t ;
//Subtracting the min capacity found over the path
while(from[r]!=0)
{
capacity[from[r]][r]-=min;
capacity[r][from[r]]+=min;
r = from[r] ;
}
if(min==MAXX)
return 0;
else
return min;
}
int main()
{
int t , n , s , c , i , j , k , a , b , p = 0 ;
unsigned long long int flow , r ;
memset(capacity,0,sizeof(capacity));
memset(from,0,sizeof(from));
memset(graph,0,sizeof(graph));
memset(v,0,sizeof(v));
scanf("%d%d",&n,&c);
for(i = 0 ; i<c ; i++)
{
scanf("%d%d%d",&a,&b,&k);
if(b!=a)
{
capacity[a][b]+=k ;
capacity[b][a]+=k ;
j = 0 ;
r = 0 ;
while(graph[a][j]!=0)
{ if(graph[a][j]==b)
{ r = 1 ; break ; }
j++;
}
if(!r) graph[a][j] = b ;
j = 0 ;
r = 0 ;
while(graph[b][j]!=0)
{ if(graph[b][j]==a)
{ r = 1 ; break ; }
j++;
}
if(!r) graph[b][j] = a ;
}
}
flow = 0 ;
r = 1 ;
while(r)
{ flow+=r ;
r = Max_Flow(1,n) ;
memset(from,0,sizeof(from));
memset(v,0,sizeof(v));
}
printf("%lld\n",flow-1);
return 0;
}
As the problem statement says: "Note that it is possible for there to be duplicate edges, as well as an edge from a node to itself". So I ignored the self-loops and added the capacities of repeated edges into the 'capacity' array entries for those nodes. I built a 'graph' adjacency structure and performed BFS from source to sink, augmenting paths until none remain. I summed up all the min values found and printed the answer. Can anyone explain why I get a wrong answer?
Suppose you had a simple graph with a single edge between start and end with capacity 1 billion.
Since your MAXX is less than 1 billion, Max_Flow would find a flow of MAXX on that edge and incorrectly conclude that no augmenting path was found.
If this is the case, then simply try replacing
#define MAXX 900000000
with
#define MAXX 1100000000
and the program might pass...
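The failure mode can be reduced to a few lines (an illustrative sketch, not the original code): the path minimum starts at the sentinel MAXX, and Max_Flow treats min == MAXX as "no augmenting path found, return 0".

```cpp
// If a real bottleneck capacity exceeds the sentinel, the returned "min"
// is the sentinel itself, so a genuine augmenting path looks like none.
long long path_min(long long sentinel, long long bottleneck)
{
    long long min_cap = sentinel;
    if (bottleneck < min_cap)
        min_cap = bottleneck;
    return min_cap;
}
```

With the original MAXX of 900000000 and a legal edge capacity of 1000000000, path_min returns the sentinel itself; any sentinel above the largest possible bottleneck avoids this.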

Double comparison

Can I do this in C++?
if (4<5<6)
cout<<"valid"<<endl;
i.e. a double comparison? Since I know that I can
bool a;
a = 1+2<3+4<5>6;//etc
Yes, you can do it, but it won't be what you expect. It's parsed as
if ( (4<5) < 6 )
which yields
if ( 1 < 6 )
because 4<5 evaluates to true which is promoted to 1, which yields, obviously, true.
You'll need
if ( (4<5) && (5<6) )
Also, yes, you can do
a = 1+2<3+4<5>6;
but that as well is parsed as
a = (((1+2)<(3+4))<5)>6;
because relational operators associate left to right. It evaluates to false, since (((1+2)<(3+4))<5) yields a bool (0 or 1 after promotion), which is never greater than 6.
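The two readings happen to agree on 4<5<6, which hides the bug; a minimal sketch (helper names are mine) shows where they diverge:

```cpp
// both readings of "x < y < z" spelled out
bool parsed(int x, int y, int z)   { return x < y < z; }      // (x < y) < z
bool intended(int x, int y, int z) { return x < y && y < z; }
```

parsed(6, 5, 4) is true, because (6 < 5) yields 0 and 0 < 4 holds, while intended(6, 5, 4) is false.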
It compiles but won't do what you expect -
if( 4 < 5 < 2)
same as
if( (4 < 5) < 2)
same as
if( (1 < 2) ) //1 obtained from cast to boolean
which is of course true, even though I imagine you were expecting something quite different.
It may be clumsy but this will work:
int i, j, k;
i = 4; j = 5; k = 6;
if ( (i < j) && (j < k) )
{
cout << "Valid!" << endl;
}
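If chains like this come up often, a small helper keeps the intent readable (an illustrative sketch; ascending is not a standard function):

```cpp
// true iff the arguments are in strictly increasing order, i.e. a < b < c < ...
template <typename T>
bool ascending(T a, T b) { return a < b; }

template <typename T, typename... Rest>
bool ascending(T a, T b, Rest... rest)
{
    return a < b && ascending(b, rest...);   // chain each adjacent pair
}
```

ascending(i, j, k) then reads like the mathematical i < j < k while expanding to the (i < j) && (j < k) form above.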