Maximum number not coprime to V - c++

I am given a fixed array A of N integers, where N <= 100,000 and every element of the array is also at most 100,000. The numbers in A are not monotonically increasing, contiguous, or otherwise conveniently organized.
I am then given up to 100,000 queries of the form {V, L, R}, where for each query I need to find the largest number A[i] with i in the range [L, R] that is not coprime with the given value V. (That is, GCD(V, A[i]) is not equal to 1.)
If that is not possible, I should report that all numbers in the given range are coprime to V.
A basic approach would be to iterate over each A[i] with i between L and R, compute GCD(V, A[i]), and take the maximum among those that are not coprime to V. But is there a better way to do it when the number of queries can also be up to 100,000? In that case it is too inefficient to check every number for every query.
Example:
Let N = 6, let the array be [1, 2, 3, 4, 5, 4], let V be 2, and let the range [L, R] be [2, 5].
Then the answer is 4.
Explanation:
GCD(2,2)=2
GCD(2,3)=1
GCD(2,4)=2
GCD(2,5)=1
So maximum is 4 here.

Since you have a large array but only one V per query, it should be faster to start by factorizing V. After that, the coprimality test reduces to checking the remainder of A[i] modulo each distinct prime factor of V.
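For illustration, a minimal C++ sketch of this idea (my own code, not the poster's, assuming 1-based indices and values up to 100,000): factor V once by trial division, then test each A[i] in the index range against those factors instead of computing a full GCD.

#include <bits/stdc++.h>
using namespace std;

// Distinct prime factors of v by trial division (v <= 100000, so at most ~316 steps).
vector<int> distinctPrimeFactors(int v) {
    vector<int> ps;
    for (int d = 2; d * d <= v; ++d)
        if (v % d == 0) {
            ps.push_back(d);
            while (v % d == 0) v /= d;
        }
    if (v > 1) ps.push_back(v);              // whatever is left is itself prime
    return ps;
}

// Largest A[i], i in [l, r], sharing a factor with v; -1 if all of them are coprime to v.
int largestNotCoprime(const vector<int>& a, int l, int r, int v) {
    vector<int> factors = distinctPrimeFactors(v);
    int best = -1;
    for (int i = l; i <= r; ++i)
        for (int p : factors)
            if (a[i] % p == 0) { best = max(best, a[i]); break; }
    return best;
}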

Daniel Bernstein's "Factoring into coprimes in essentially linear time" (Journal of Algorithms 54:1, 1-30 (2005)) answers a similar question, and is used to identify bad (repeated-factor) RSA moduli in Nadia Heninger's "New research: There's No Need to Panic Over Factorable Keys--Just Mind Your Ps and Qs". The problem there is to find common factors among a huge set of very large numbers, without going a pair at a time.

Let's say that
V = p_1 * ... * p_n
where the p_i are prime (you can restrict this to distinct primes only). Now the answer is
result = -1
for p_i:
    res = floor(R / p_i) * p_i
    if res >= L and res > result:
        result = res
So if you can factorize V fast, then this will be quite efficient.
EDIT: I didn't notice that the array does not have to contain all integers. In that case, sieve it: given the prime factors p_1, ..., p_n, create a "reversed" sieve, i.e. the set of all multiples of those primes in the range [L, R]. Then you can just intersect that sieve with your initial array.
EDIT 2: To generate the set of all multiples you can use this algorithm:
primes = [p_1, ..., p_n]
multiples = []
for p in primes:
    lower = floor((L - 1) / p)    # so that L itself is included when p divides L
    upper = floor(R / p)
    for i in [lower+1, upper]:
        multiples.append(i * p)
The important thing is that, by construction, V is coprime to every number in the range [L, R] that is not in multiples. Now you simply do:
solution = -1
for no in initial_array:
    if no in multiples:
        solution = max(solution, no)
Note that if you implement multiples as a hash set, then the "no in multiples" check is O(1) on average.
EXAMPLE Let's say that V = 6 = 2*3 and initial_array = [7,11,12,17,21] and L=10 and R=22. Let's start with multiples. Following the algorithm we obtain that
multiples = [10, 12, 14, 16, 18, 20, 22, 12, 15, 18, 21]
The first 7 are multiples of 2 (in the range [10, 22]) and the last 4 are multiples of 3 (in the range [10, 22]). Since we are dealing with a set (std::set or std::unordered_set), there will be no duplicates (12 and 18 appear only once):
multiples = [10, 12, 14, 16, 18, 20, 22, 15, 21]
Now go through the initial_array and check what values are in multiples. We obtain that the biggest such number is 21. And indeed 21 is not coprime with 6.
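Here is a compact C++ sketch of the reversed sieve (my own code, and it keeps this answer's reading in which [L, R] bounds the values rather than the indices); the hash set makes the membership test cheap:

#include <bits/stdc++.h>
using namespace std;

// primesOfV = distinct prime factors of V (obtained however you like, e.g. trial division).
int largestNonCoprimeValue(const vector<int>& arr, const vector<int>& primesOfV, int L, int R) {
    unordered_set<int> multiples;
    for (int p : primesOfV) {
        int lower = (L - 1) / p;             // last multiple of p strictly below L
        int upper = R / p;                   // last multiple of p not above R
        for (int i = lower + 1; i <= upper; ++i)
            multiples.insert(i * p);
    }
    int solution = -1;
    for (int x : arr)                        // intersect the sieve with the array
        if (multiples.count(x))
            solution = max(solution, x);
    return solution;
}

// Example from the text: largestNonCoprimeValue({7,11,12,17,21}, {2,3}, 10, 22) returns 21.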

Factor each of A's elements and store, for each possible prime factor, a list of the indices (in increasing order) whose values contain this factor.
Since a number n has O(log n) prime factors, these lists use O(N log N) memory in total.
Then, for each query (V, L, R), for each prime factor of V, find the largest A[i] with i in [L, R] whose value contains that factor (a binary search locates the relevant part of the list).
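A rough C++ sketch of this scheme (my own code, with its own assumptions: 1-based indices, values up to 100,000, and a linear pass over the slice that the binary search locates; for adversarial inputs a range-maximum structure over each list would replace that pass):

#include <bits/stdc++.h>
using namespace std;

const int MAXV = 100000;
int spf[MAXV + 1];                    // smallest prime factor of every value
vector<pair<int,int>> occ[MAXV + 1];  // occ[p] = (index, value) of the elements divisible by p

void buildSpf() {
    for (int i = 2; i <= MAXV; ++i)
        if (spf[i] == 0)
            for (int j = i; j <= MAXV; j += i)
                if (spf[j] == 0) spf[j] = i;
}

vector<int> distinctPrimes(int x) {   // O(log x) distinct prime factors via the sieve
    vector<int> ps;
    while (x > 1) {
        int p = spf[x];
        ps.push_back(p);
        while (x % p == 0) x /= p;
    }
    return ps;
}

int query(int v, int l, int r) {
    int best = -1;
    for (int p : distinctPrimes(v)) {
        // binary search for the slice of occ[p] whose indices fall inside [l, r]
        auto lo = lower_bound(occ[p].begin(), occ[p].end(), make_pair(l, INT_MIN));
        auto hi = upper_bound(occ[p].begin(), occ[p].end(), make_pair(r, INT_MAX));
        for (auto it = lo; it != hi; ++it)   // linear pass over the slice; a range-max
            best = max(best, it->second);    // structure per list would avoid this
    }
    return best;                             // -1: every A[i] in [l, r] is coprime to v
}

int main() {
    buildSpf();
    int n;
    cin >> n;
    vector<int> a(n + 1);
    for (int i = 1; i <= n; ++i) {
        cin >> a[i];
        for (int p : distinctPrimes(a[i]))
            occ[p].push_back({i, a[i]});     // indices arrive in increasing order
    }
    int q;
    cin >> q;
    while (q--) {
        int v, l, r;
        cin >> v >> l >> r;
        int ans = query(v, l, r);
        if (ans == -1) cout << "all coprime\n";
        else cout << ans << "\n";
    }
    return 0;
}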

Related

Where is my logic falling incorrect, if the logic is partially correct? [duplicate]

I understand how the greedy algorithm for the coin change problem (pay a specific amount with the minimal possible number of coins) works - it always selects the coin with the largest denomination not exceeding the remaining sum - and that it always finds the correct solution for specific coin sets.
But for some coin sets, there are sums for which the greedy algorithm fails. For example, for the set {1, 15, 25} and the sum 30, the greedy algorithm first chooses 25, leaving a remainder of 5, and then five 1s for a total of six coins. But the solution with the minimal number of coins is to choose 15 twice.
What conditions must a set of coins fulfil so that the greedy algorithm finds the minimal solution for all sums?
A set which forms a matroid (https://en.wikipedia.org/wiki/Matroid) can be used to solve the coin-changing problem with a greedy approach. In brief, a matroid is an ordered pair
M = (S, I) satisfying the following conditions:
1. S is a finite nonempty set.
2. I is a nonempty family of subsets of S, called the independent subsets, such that if B ∈ I and A is a subset of B, then A ∈ I.
3. If A ∈ I, B ∈ I and |A| < |B|, then there is some element x ∈ B - A such that A ∪ {x} ∈ I.
In our coin-changing question, S is the set of all the coins in decreasing order of value.
We need to achieve a value of V using the minimum number of coins in S.
In our case, I is the family of independent subsets: all subsets whose coin values sum to at most V.
If our set system is a matroid, then our answer is a maximal set A in I, to which no further x can be added.
To check, we see whether the properties of a matroid hold for the set S = {25, 15, 1} with V = 30.
Now, there are two subsets in I:
A = {25} and B = {15, 15}
Since |A| < |B|, there should be some element x ∈ B - A such that A ∪ {x} ∈ I (by condition 3).
So {25, 15} should belong to I, but that is a contradiction since 25 + 15 > 30.
So S is not a matroid, and hence the greedy approach won't work on it.
The greedy algorithm works whenever there is no coin whose value, added to the lowest denomination, is lower than twice the value of the denomination immediately below it.
i.e. {1, 2, 3} works because [1, 3] and [2, 2] add to the same value;
however {1, 15, 25} doesn't work, because (for the change 30) 15 + 15 > 25 + 1.
A coin system is canonical if the number of coins given in change by the greedy algorithm is optimal for all amounts.
This paper offers an O(n^3) algorithm for deciding whether a coin system is canonical, where n is the number of different kinds of coins.
For a non-canonical coin system, there is an amount c for which the greedy algorithm produces a suboptimal number of coins; c is called a counterexample. A coin system is tight if its smallest counterexample is larger than the largest single coin.
This is a recurrence problem. Given a set of coins {Cn, Cn-1, ..., 1}, such that Ck > Ck-1 for 1 <= k <= n, the Greedy Algorithm will yield the minimum number of coins if Ck > Ck-1 + Ck-2 and, for the value V = (Ck + Ck-1) - 1, applying the Greedy Algorithm to the subset of coins {Ck, Ck-1, ..., 1} (where Ck <= V) results in fewer coins than applying the Greedy Algorithm to the subset of coins {Ck-1, Ck-2, ..., 1}.
The test is simple: for 1 <= k <= n, test the number of coins the Greedy Algorithm yields for a value of Ck + Ck-1 - 1. Do this for the coin set {Ck, Ck-1, ..., 1} and for {Ck-1, Ck-2, ..., 1}. If, for any k, the latter yields fewer coins than the former, the Greedy Algorithm will not work for this coin set.
For example, with n=4, consider the coin set {C4, C3, C2, 1} = {50, 25, 10, 1}. Start with k=n=4, so V = Cn + Cn-1 - 1 = 50+25-1 = 74 as the test value. For V=74, G{50, 25, 10, 1} = 7 coins and G{25, 10, 1} = 8 coins. So far, so good. Now let k=3; then V = 25+10-1 = 34. G{25, 10, 1} = 10 coins but G{10, 1} = 7 coins. So we know that the Greedy Algorithm will not minimize the number of coins for the coin set {50, 25, 10, 1}. On the other hand, if we add a nickel to this coin set, G{25, 10, 5, 1} = 6 and G{10, 5, 1} = 7. Likewise, for V = 10+5-1 = 14, we get G{10, 5, 1} = 5, but G{5, 1} = 6. So we know Greedy works for {50, 25, 10, 5, 1}.
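For anyone who wants to replay these numbers, here is a small C++ helper (a sketch of my own, not part of the answer) that computes the greedy coin count used throughout the example:

#include <bits/stdc++.h>
using namespace std;

// Number of coins the Greedy Algorithm uses for `value`; coins sorted descending, last coin = 1.
int greedyCount(const vector<int>& coins, int value) {
    int count = 0;
    for (int c : coins) { count += value / c; value %= c; }
    return count;
}

int main() {
    cout << greedyCount({50, 25, 10, 1}, 74) << "\n";  // 7
    cout << greedyCount({25, 10, 1}, 74) << "\n";      // 8
    cout << greedyCount({25, 10, 1}, 34) << "\n";      // 10
    cout << greedyCount({10, 1}, 34) << "\n";          // 7  -> greedy fails for {50, 25, 10, 1}
    cout << greedyCount({25, 10, 5, 1}, 34) << "\n";   // 6
    cout << greedyCount({10, 5, 1}, 34) << "\n";       // 7
    cout << greedyCount({10, 5, 1}, 14) << "\n";       // 5
    cout << greedyCount({5, 1}, 14) << "\n";           // 6
    return 0;
}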
That begs the question: what denominations of coins, satisfying the Greedy Algorithm, result in the smallest worst-case number of coins for any value from 1 to 100? The answer is quite simple: 100 coins, each with a different value from 1 to 100. Arguably this is not very useful, since it requires a linear search through the coins for every transaction, not to mention the expense of minting so many different denominations and tracking them.
Now, if we want to primarily minimize the number of denominations while secondarily minimizing the resulting number of coins, produced by Greedy, for any value from 1 to 100, then coins in denominations of powers of 2, {64, 32, 16, 8, 4, 2, 1}, give a maximum of 6 coins for any value 1..100 (the maximum number of 1's in a seven-bit number whose value is less than decimal 100). But this requires 7 denominations of coin. The worst case for the five denominations {50, 25, 10, 5, 1} is 8, which occurs at V=94 and V=99. Coins in powers of 3, {1, 3, 9, 27, 81}, also require only 5 denominations to be serviceable by Greedy, but also yield a worst case of 8 coins (at a value of 80). Finally, using any five-denomination subset of {64, 32, 16, 8, 4, 2, 1} that includes '1' and satisfies Greedy also results in a maximum of 8 coins. So there is a trade-off: increasing the number of denominations from 5 to 7 reduces the maximum number of coins needed to represent any value between 1 and 100 from 8 to 6.
On the other hand, if you want to minimize the number of coins exchanged between a buyer and a seller, assuming each has at least one coin of each denomination in their pocket, then this problem is equivalent to the fewest weights it takes to balance any weight from 1 to N pounds. It turns out that the fewest number of coins exchanged in a purchase is achieved if the coin denominations are given in powers of 3: {1, 3, 9, 27, . . .}.
See https://puzzling.stackexchange.com/questions/186/whats-the-fewest-weights-you-need-to-balance-any-weight-from-1-to-40-pounds.
Theory:
If the greedy algorithm always produces an optimal answer for a given set of coins, you say that set is canonical.
Stating the best known algorithmic test [O(n^3)] for determining whether an arbitrary set of n coins is canonical, as succinctly as I can:
[c1, c2, ..., cn] is canonical iff |G(w_ij)| = |M(w_ij)| for all w_ij, 1 < i <= j <= n
where [c1,c2,...cn] is the list of coin denominations sorted descending with cn = 1
G(x) represents the coin vector result of running the greedy algorithm on input x, (returned as [a1, a2,..., an] where ai is the count of ci)
M(x) represents a coin vector representation of x which uses the fewest coins
|V| represents the size of the coin vector V: the total number of coins in the vector
and w_ij is the evaluated value of the coin vector produced by G(c_(i-1) - 1) after incrementing its j'th coin by 1 and zeroing all its coin counts from j+1 to n.
Implementation (JavaScript):
/**
 * Check if coins can be used greedily to optimally solve change-making problem
 * coins: [c1, c2, c3...] : sorted descending with cn = 1
 * return: [optimal?, minimalCounterExample | null, greedySubOptimal | null] */
function greedyIsOptimal(coins) {
    for (let i = 1; i < coins.length; i++) {
        let greedyVector = makeChangeGreedy(coins, coins[i - 1] - 1)
        for (let j = i; j < coins.length; j++) {
            let [minimalCoins, w_ij] = getMinimalCoins(coins, j, greedyVector)
            let greedyCoins = makeChangeGreedy(coins, w_ij)
            if (coinCount(minimalCoins) < coinCount(greedyCoins))
                return [false, minimalCoins, greedyCoins]
        }
    }
    return [true, null, null]
}

// coins [c1, c2, c3...] sorted descending with cn = 1 => greedy coinVector for amount
function makeChangeGreedy(coins, amount) {
    return coins.map(c => {
        let numCoins = Math.floor(amount / c);
        amount %= c
        return numCoins;
    })
}

// generate a potential counter-example in terms of its coinVector and total amount of change
function getMinimalCoins(coins, j, greedyVector) {
    let minimalCoins = greedyVector.slice();
    minimalCoins[j - 1] += 1
    for (let k = j; k < coins.length; k++) minimalCoins[k] = 0
    return [minimalCoins, evaluateCoinVector(coins, minimalCoins)]
}

// return the total amount of change for coinVector
const evaluateCoinVector = (coins, coinVector) =>
    coins.reduce((change, c, i) => change + c * coinVector[i], 0)

// return number of coins in coinVector
const coinCount = (coinVector) =>
    coinVector.reduce((count, a) => count + a, 0)

/* Testing */
let someFailed = false;
function test(coins, expect) {
    console.log(`testing ${coins}`)
    let [optimal, minimal, greedy] = greedyIsOptimal(coins)
    if (optimal != expect) (someFailed = true) && console.error(`expected optimal=${expect}
        optimal: ${optimal}, amt:${evaluateCoinVector(coins, minimal)}, min: ${minimal}, greedy: ${greedy}`)
}
// canonical examples
test([25, 10, 5, 1], true) // USA
test([240, 60, 24, 12, 6, 3, 1], true) // Pound Sterling - 30
test([240, 60, 30, 12, 6, 3, 1], true) // Pound Sterling - 24
test([16, 8, 4, 2, 1], true) // Powers of 2
test([5, 3, 1], true) // Simple case
// non-canonical examples
test([240, 60, 30, 24, 12, 6, 3, 1], false) // Pound Sterling
test([25, 12, 10, 5, 1], false) // USA + 12c
test([25, 10, 1], false) // USA - nickel
test([4, 3, 1], false) // Simple cases
test([6, 5, 1], false)
console.log(someFailed ? "test(s) failed" : "All tests passed.")
Well, we really need to reformulate this question... What the greedy algorithm essentially does is try to reach the target value using the provided coin denominations. Any change you make to the greedy algorithm simply changes the way of reaching the target value.
It does not account for the minimum number of coins used...
To put it a better way, a safe move does not exist for this problem.
A higher-denomination coin may reach the target value quickly, but it is not a safe move.
Example: {50, 47, 51, 2, 9} to obtain 100.
The greedy choice would be to take the highest-denomination coin to reach 100 more quickly:
51 + 47 + 2.
Well, it reaches 100, but 50 + 50 would have done it with fewer coins.
Let's take {50, 47, 51, 9} to obtain 100.
If it makes the greedy choice of the highest coin,
51, it then needs 49 from the set. It does not know whether that is possible or not. It tries to reach 100 but it cannot.
And changing the greedy choice simply changes the way of trying to reach 100.
These types of problems create a set of solutions and form a branch of a decision tree.
Today, I solved a question similar to this on Codeforces (link provided at the end).
My conclusion was that for the coin-change problem to be solvable by the greedy algorithm, it should satisfy the following condition:
1. On sorting coin values in ascending order, all values greater than the current element should be divisible by the current element (a small check for this is sketched below the example).
e.g. coins = {1, 5, 10, 20, 100}: this will give the correct answer, since {5, 10, 20, 100} are all divisible by 1, {10, 20, 100} are all divisible by 5, {20, 100} are all divisible by 10, and {100} is divisible by 20.
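Since every larger coin being divisible by every smaller one is, by transitivity, the same as every coin dividing its immediate successor in sorted order, the condition can be checked in a few lines of C++ (my own illustration, not part of the original answer):

#include <bits/stdc++.h>
using namespace std;

// True if, after sorting ascending, every coin divides the next one
// (and therefore, by transitivity, every larger coin).
bool divisibilityChain(vector<int> coins) {
    sort(coins.begin(), coins.end());
    for (size_t i = 0; i + 1 < coins.size(); ++i)
        if (coins[i + 1] % coins[i] != 0) return false;
    return true;
}

// divisibilityChain({1, 5, 10, 20, 100}) -> true; divisibilityChain({1, 15, 25}) -> false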
Hope this gives some idea.
996 A - Hit the lottery
https://codeforces.com/blog/entry/60217
An easy to remember case is that any set of coins such that, if they are sorted in ascending order and you have:
coin[0] = 1
coin[i+1] >= 2 * coin[i] for every adjacent pair of coins in the (ascending) list,
then a greedy algorithm using such coins will work.
Depending on the range of amounts you need to cover, there may be a more optimal allocation (in terms of the number of coins required). An example of this is if you're only considering the range 6..8 and you have the coins <6, 7, 8> instead of <1, 2, 4, 8>.
The most efficient allocation of coins that is complete over the positive integers sits at equality in the above set of rules, giving you the coins 1, 2, 4, 8, ...; this is simply the binary representation of any number. In some sense, conversion between bases is a greedy algorithm in itself.
A proof of the >= 2N inequality is provided by Max Rabkin in this discussion:
http://olympiad.cs.uct.ac.za/old/camp0_2008/day1_camp0_2008_discussions.pdf

Count Divisors of Product from L to R

I have been solving a problem but then got stuck upon its subpart which is as follows:
Given an array of N elements whose ith element is A[i] and we are given Q queries of the type [L,R].
For each query output the number of divisors of product from Lth element to Rth element.
More formally, for each query let's define P as P = A[L] * A[L+1] * A[L+2] * ... * A[R].
Output the number of divisors of P modulo 998244353.
Constraints : 1<= N,Q <= 100000, 1<= A[i] <= 1000000.
My approach:
For each index i, I have defined a map<int, int> which stores each prime divisor and its exponent in the product of A[1..i].
I am extracting the prime divisors of a number in O(log N) using a sieve.
Then for each query (let's say {L, R}), I am iterating through the map of the Lth element and subtracting the count of each key from the map of the Rth element.
And then I am answering the query using the result:
if N = a^p * b^q * c^r ...(a,b,c being primes)
the number of divisors = (p+1)(q+1)(r+1)..
The time complexity of the above solution is O(ND + QD), where D is the number of distinct primes up to 1,000,000. In the worst case D = 78498.
Is there a more efficient solution than this?
There is a more efficient solution for this. But it is slightly complicated. Here are steps to get to the necessary data structure.
1. Define a data type prime_factor that is a struct containing a prime and a count.
2. Define a data type prime_factorization that is a vector of the first data type, in ascending order of the primes. This can store the factorization of a number.
3. Write a function that takes a number and computes its prime_factorization.
4. Write a function that takes 2 prime_factorization vectors and merges them into the factorization of the product of the two.
5. For each number in your array, compute its prime factorization. That gets stored in an array.
6. For each pair in your array, compute the prime factorization of the product. We only need half as many of these as there are elements: elements 0, 1 go into one factorization, 2, 3 into the next, and so on.
7. Repeat step 6 O(log(N)) times, so you have a vector of the factorization of each number, of pairs, fours, eights, and so on. This results in approximately 2N precomputed factorization vectors. Most vectors are small, though a few can be up to O(D) in size (where D is the number of distinct primes). Most of the merges should be very, very fast.
And now you have all of your data prepared. It can't take more than O(log(N)) times the space that storing the prime factors required by itself. (Less than that normally, though, because repeats among the small primes get gathered together in one prime_factor.)
Any range is the union of at most O(log(N)) of these computed vectors. For example, the range 10..25 can be broken up into 10..11, 12..15, 16..23, 24..25. Arrange these intervals from smallest to largest and merge them. Then compute your answer from the result.
An exact analysis is complicated. But I assure you that query time is bounded above by O(Q * D * log(N)) and normally is much less than that.
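To make the first four steps concrete, here is one way they might look in C++ (a sketch under names of my own choosing; the answer does not pin down an exact interface):

#include <bits/stdc++.h>
using namespace std;

struct prime_factor { long long prime; int count; };
using prime_factorization = vector<prime_factor>;        // kept sorted by prime

prime_factorization factorize(long long n) {              // step 3: trial division
    prime_factorization f;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) {
            int c = 0;
            while (n % d == 0) { n /= d; ++c; }
            f.push_back({d, c});
        }
    if (n > 1) f.push_back({n, 1});
    return f;
}

// step 4: factorization of the product, merged like two sorted lists
prime_factorization mergeFactorizations(const prime_factorization& a, const prime_factorization& b) {
    prime_factorization out;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i].prime < b[j].prime)      out.push_back(a[i++]);
        else if (b[j].prime < a[i].prime) out.push_back(b[j++]);
        else { out.push_back({a[i].prime, a[i].count + b[j].count}); ++i; ++j; }
    }
    while (i < a.size()) out.push_back(a[i++]);
    while (j < b.size()) out.push_back(b[j++]);
    return out;
}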
UPDATE:
How do you find those intervals?
The answer is that you need to identify the number divisible by the highest power of 2 in the range, and then fill out both sides from there. You can find it by repeatedly dividing both boundaries by 2 (rounding down) until just before they become equal; then multiply the top boundary by the power of 2 you divided out, and that is the mid-point.
For example, if your range was 35-53 you would start by dividing by 2 to get 35-53, 17-26, 8-13, 4-6, 2-3. That was 2^4 we divided by, so our power-of-2 mid-point is 3*2^4 = 48. Our intervals above that midpoint are then 48-51, 52-53. Our intervals below are 40-47, 36-39, 35-35. And each of them is of length a power of 2 and starts at a number divisible by that power of 2.
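Here is a small C++ sketch (my own formulation, not the answer's code) that produces such a decomposition by peeling off, from the left end, the largest aligned power-of-two block that still fits:

#include <bits/stdc++.h>
using namespace std;

// Split the inclusive range [lo, hi] into blocks whose length is a power of 2
// and whose start is divisible by that length.
vector<pair<int,int>> splitRange(int lo, int hi) {
    vector<pair<int,int>> blocks;
    while (lo <= hi) {
        int len = 1;
        // grow the block while it stays aligned (lo divisible by 2*len) and inside the range
        while (lo % (2 * len) == 0 && lo + 2 * len - 1 <= hi) len *= 2;
        blocks.push_back({lo, lo + len - 1});
        lo += len;
    }
    return blocks;
}

// splitRange(35, 53) -> (35,35) (36,39) (40,47) (48,51) (52,53)
// splitRange(10, 25) -> (10,11) (12,15) (16,23) (24,25)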

Subsequence having sum at most 'k'

Given a non-decreasing array A of size n and an integer k, how can we find a subsequence S of the array A with the maximum possible sum of its elements, such that this sum is at most k? If there are multiple such subsequences, we are interested in finding only one.
For example, let the array be {1, 2, 2, 4} so, n = 4 and let k = 7. Then, the answer should be {1, 2, 4}.
A brute-force approach takes approximately O(n(2^n - 1)), but is there a more efficient solution to this problem?
In the general case the answer is no.
Just deciding if there is a solution where elements sum up to k is equivalent to the Subset Sum Problem and thus already NP-complete.
The Subset Sum Problem can be equivalently formulated as: given the integers or natural numbers w_1, ..., w_n, does any subset of them sum to precisely W?
However, if either n or the number of bits P that it takes to represent the largest number w is small, there might be a more efficient solution (e.g., a pseudo-polynomial algorithm based on dynamic programming if P is small). Additionally, if all your numbers w are positive, then it might also be possible to find a better solution.
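For concreteness, if the values are non-negative integers and k is small enough for an O(n*k) table, the pseudo-polynomial dynamic program mentioned above could look roughly like this C++ sketch (not a reference implementation):

#include <bits/stdc++.h>
using namespace std;

// One subsequence of `a` (values assumed to be non-negative integers) whose sum
// is as large as possible while staying <= k. O(n*k) time and memory.
vector<int> bestSubsequence(const vector<int>& a, int k) {
    int n = a.size();
    // reachable[i][s] = true if some subset of a[0..i-1] sums to exactly s
    vector<vector<char>> reachable(n + 1, vector<char>(k + 1, 0));
    reachable[0][0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int s = 0; s <= k; ++s)
            reachable[i][s] = reachable[i - 1][s] ||
                              (s >= a[i - 1] && reachable[i - 1][s - a[i - 1]]);
    int best = k;
    while (!reachable[n][best]) --best;          // largest reachable sum <= k (0 always works)
    vector<int> chosen;                          // walk the table backwards to recover one subset
    for (int i = n, s = best; i > 0; --i)
        if (!reachable[i - 1][s]) { chosen.push_back(a[i - 1]); s -= a[i - 1]; }
    reverse(chosen.begin(), chosen.end());
    return chosen;
}

// bestSubsequence({1, 2, 2, 4}, 7) returns {1, 2, 4} (sum 7), matching the example above.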

How to count co-prime to n in a range [1,x] where x can be different from n?

I want to count the numbers coprime to n in a range [1, x]. I have tried using Euler's phi function, but it only gives the count for [1, n]. Can anyone suggest a modification of Euler's phi, or any other approach, to do this?
I have used phi(n) = n * product of (1 - 1/p), where p ranges over the prime factors of n.
You can use the Inclusion-Exclusion Principle.
Find the unique prime factors of N (there cannot be more than 10-12 of them, considering N and X <= 10^10).
Now you can find the number of numbers <= x that are divisible by 'y' just by dividing. Try all combinations of factors of n for y (you will get only 2^10 (1024) combinations in the worst case).
Now use Inclusion-Exclusion to find the count of numbers co-prime to n that are at most x.
The idea is that if a number is not co-prime to n, then it will have
at least one prime factor common with n.
For our example here, let's consider X = 35 and N = 30.
1. First find the unique prime factors of the number (there cannot be more than 10-12 of them). The unique prime factors of N are {2, 3, 5}.
2. Find the product of each factor PAIR: {2x3, 2x5, 3x5, i.e. 6, 10, 15}.
3. Find the product of each factor TRIPLET: {2x3x5, i.e. 30}.
4. Repeat until all factors are multiplied together: {N = 30, so no more steps are required}.
5. Find the sum of X divided (integer division) by each factor from step 1: {X = 35: 35/2 + 35/3 + 35/5 = 17 + 11 + 7 = 35}.
6. Find the sum of X divided by each number from step 2: {X = 35: 35/6 + 35/10 + 35/15 = 5 + 3 + 2 = 10}.
7. Find the sum of X divided by each number from step 3: {X = 35: 35/30 = 1}.
8. Repeat until all results from step 4 are absorbed: {X = 35: no more steps are required}.
Number of co-primes to N in the range [1..X] = X - step 5 + step 6 - step 7, etc. {For N = 30, X = 35 this is 35 - 35 + 10 - 1 = 9}.
For N=30, X=60 you will have:
60 - (60/2 + 60/3 + 60/5) + (60/6 + 60/10 + 60/15) - (60/30) = 60 - (30 + 20 + 12) + (10 + 6 + 4) - 2 = 60 - 62 + 20 - 2 = 16.
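A compact C++ sketch of this procedure (my own code): enumerate every subset of N's distinct prime factors and add or subtract floor(X / product) according to the subset's parity:

#include <bits/stdc++.h>
using namespace std;

// Count of numbers in [1, X] that are coprime to N, by inclusion-exclusion
// over the distinct prime factors of N.
long long coprimesUpTo(long long X, long long N) {
    vector<long long> primes;                    // distinct prime factors of N
    for (long long d = 2; d * d <= N; ++d)
        if (N % d == 0) {
            primes.push_back(d);
            while (N % d == 0) N /= d;
        }
    if (N > 1) primes.push_back(N);

    long long count = 0;
    int m = primes.size();
    for (int mask = 0; mask < (1 << m); ++mask) {   // every subset of the prime factors
        long long prod = 1;
        int bits = 0;
        for (int b = 0; b < m; ++b)
            if (mask & (1 << b)) { prod *= primes[b]; ++bits; }
        count += (bits % 2 == 0 ? 1 : -1) * (X / prod);
    }
    return count;
}

// coprimesUpTo(35, 30) == 9 and coprimesUpTo(60, 30) == 16, matching the examples above.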
Suppose X = 10. N = 6 = 2 * 3.
We have the numbers {1, 2, 3, ..., 10}.
Remove all multiples of 2. You get: {1, 3, 5, 7, 9}.
Remove all multiples of 3. You get: {1, 5, 7}.
How do we count this efficiently? Try answering this question: How many numbers are there in [1, X] that are divisible by p? It's Floor(X/p), right? i.e., p, 2p, ..., kp, where kp <= X. So, from X, we can subtract Floor(X/p),
and you will get the number of numbers that are relatively prime to p in [1, X].
In this example, there are 10 numbers. The number of numbers divisible by 2 is 10/2, which is 5. So 10 - 5 = 5 numbers are relatively prime to 2. Similarly, there are 10/3 = 3 numbers which are multiples of 3. So, can we say that there are 5 - 3 = 2 numbers that are relatively prime to 2 and 3? No, because you have double counted! Why? 6 has been included in the count for both p = 2 and p = 3. So we have to account for this by adding back the multiples of both 2 and 3. There is only one multiple of 2 and 3 in [1, 10], which is 6. So, add 1. Which means the answer is 10 - 5 - 3 + 1 = 3, which is right.
The generalisation of this is the inclusion-exclusion principle. For every n, we are just finding its prime factors, of which there will be fewer than about 10. This is done using the Sieve of Eratosthenes, followed by a prime factorisation. (Since X < 10^9, the maximum number of distinct prime factors a number can have is small. Try finding the product of the first 10 primes: it is 6469693230, which is about 6.5*10^9. I consider the max limit as 10^10; this can easily be extended to big numbers like 10^18.)
I hope this helps!

Find a prime number?

To find whether N is a prime number, we only need to check divisors less than or equal to sqrt(N). Why is that? I am writing C code, so I am trying to understand the reason behind it.
N is prime if it is a positive integer which is divisible by exactly two positive integers, 1 and N. Since a number's divisors cannot be larger than that number, this gives rise to a simple primality test:
If an integer N, greater than 1, is not divisible by any integer in the range [2, N-1], then N is prime. Otherwise, N is not prime.
However, it would be nice to modify this test to make it faster. So let us investigate.
Note that the divisors of N occur in pairs. If N is divisible by a number M, then it is also divisible by N/M. For instance, 12 is divisible by 6, and so also by 2. Furthermore, if M >= sqrt(N), then N/M <= sqrt(N).
This means that if no numbers less than or equal to sqrt(N) divide N, no numbers greater than sqrt(N) divide N either (excepting 1 and N themselves), otherwise a contradiction would arise.
So we have a better test:
If an integer N, greater than 1, is not divisible by any integer in the range [2, sqrt(N)], then N is prime. Otherwise, N is not prime.
If you consider the reasoning above, you should see that a number which passes this test also passes the first test, and a number which fails this test also fails the first test. The tests are therefore equivalent.
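The improved test, written out as a short sketch (my own code, not from the answer):

// Plain C++ (ports to C with <stdbool.h>): trial division only up to sqrt(n).
bool isPrime(unsigned int n) {
    if (n < 2) return false;                        // 0 and 1 are not prime
    for (unsigned long long d = 2; d * d <= n; ++d) // d*d <= n means d <= sqrt(n), no sqrt() call
        if (n % d == 0) return false;               // d and n/d form a factor pair
    return true;
}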
A composite number (one that is neither prime nor 1) has at least one pair of factors, and it is guaranteed that one of the numbers from each pair is less than or equal to the square root of the number (which is what you are asking about).
If you square the square root of the number, you get the number itself (sqrt(n) * sqrt(n) = n), so if you made one of the numbers bigger (than sqrt(n)) you would have to make the other one smaller. If you then only check the numbers 2 through sqrt(n) you will have checked all of the possible factors, since each of those factors will be paired with a number that is greater than sqrt(n) (except of course if the number is in fact a square of some other number, like 4, 9, 16, etc...but that doesn't matter since you know they aren't prime; they are easily factored by sqrt(n) itself).
The reason is simple: any factor bigger than the square root forces the other factor to be smaller than the square root. In that case, you would already have checked it.
Let n=a×b be composite.
Assume a>sqrt(n) and b>sqrt(n).
a×b > sqrt(n)×sqrt(n)
a×b > n
But we know a×b = n, therefore a <= sqrt(n) or b <= sqrt(n).
Since you only need to know a or b to show n is composite, you only need to check the numbers up to sqrt(n) to find such a number.
Because in the worst case, the number n can be expressed as a^2, i.e. a = sqrt(n).
If the number can be expressed differently, that means that one of the divisors will be less than a = sqrt(n), but the other can be greater.