Maxima: shuffle order of elements in a list?

Question
Given a list L: [a, b, c, d, e, f], is there a built-in way of randomly shuffling the elements of the list? Something like:
M:random_order(L);
> [ b, c, d, a, e, f]
I checked the documentation under Functions and Variables for Lists for a built-in option for shuffling the order of elements in a list, but didn't see anything obvious.
Context
I'm trying to generate lists of x terms whose maximal sum is s. Right now I do this by building the list term by term, where each term is a random number between 1 and the largest value that still leaves room for the remaining terms: if each remaining term takes its minimal value of 1, the total must be at most s:
/* `x` is the total number of terms; `s` is the max sum */
gen_val(x, s) := block([vals: makelist(0, i, x)],
    /*
     the first value is a random integer in [1, s-x]: if
     vals[1] = s-x, then all remaining terms have to be equal to 1
    */
    vals[1]: 1 + random(s - x),
    /*
     subsequent terms are assigned in the same way, subtracting the sum of
     previously assigned values, as well as reserving at least 1 unit for
     each remaining term
    */
    for i: 2 thru x do
        vals[i]: 1 + random(s - sum(vals[k], k, 1, i-1) - (x - i + 1)),
    /* return the list */
    vals
);
However, this generates lists in which earlier terms (i.e. lower indices) have a higher probability of getting large values, whereas I'd like the values to be distributed more evenly across positions.
The simplest solution I could think of was simply shuffling the elements of the vals list; however, I'd be equally interested in any other method that achieves the desired result (i.e. a list of x terms whose sum is at most s).
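A side note: Maxima does ship a random_permutation function (if memory serves, it is documented under the set functions rather than the list functions, which may be why it didn't turn up). For illustration, here is a minimal Fisher-Yates sketch in C++ (function name mine); the same three-line loop is easy to transcribe into Maxima:

#include <random>
#include <utility>
#include <vector>

// In-place Fisher-Yates shuffle: each of the n! orderings is equally
// likely. std::shuffle from <algorithm> does the same job.
void fisher_yates(std::vector<int>& v, std::mt19937& rng) {
    for (std::size_t i = v.size(); i > 1; --i) {
        std::uniform_int_distribution<std::size_t> d(0, i - 1);
        std::swap(v[i - 1], v[d(rng)]);
    }
}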
The even broader context is the problem of dividing an interval of the number line into sub-intervals. I decided to take the length of the interval and the number of partitions as my variables for building the sub-intervals, hence the goal stated above. If I = [a, b] is the full interval, then for any c, d such that c + d <= b - a we can define the subintervals [a, a+c], [a+c, a+c+d], [a+c+d, b].
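There is also a standard way to get an even distribution directly, assuming the goal is x positive parts summing to exactly s (for "at most s" you can draw the total first): pick x-1 distinct cut points in [1, s-1], sort them, and take differences ("stars and bars"). A sketch in C++ (function name mine; requires s >= x and s >= 2):

#include <random>
#include <set>
#include <vector>

// Uniformly random composition of s into x positive parts: x-1 distinct
// cut points split [0, s] into x pieces, so every position has the same
// value distribution.
std::vector<int> random_composition(int x, int s, std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(1, s - 1);
    std::set<int> cuts;                        // distinct and kept sorted
    while ((int)cuts.size() < x - 1) cuts.insert(dist(rng));
    std::vector<int> parts;
    int prev = 0;
    for (int c : cuts) { parts.push_back(c - prev); prev = c; }
    parts.push_back(s - prev);                 // last piece, up to s
    return parts;
}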

Related

What would be the fastest algorithm to randomly select N items from a list based on weights distribution?

I have a large list of items, each item has a weight.
I'd like to select N items randomly without replacement, such that items with more weight are more likely to be selected.
I'm looking for the best-performing approach; performance is paramount. Any ideas?
If you want to sample items without replacement, you have lots of options.
Use a weighted-choice-with-replacement algorithm to choose random indices. There are many algorithms like this. One of them is WeightedChoice, described later in this answer, and another is rejection sampling, described as follows. Assume that the highest weight is max, there are n weights, and each weight is 0 or greater. To choose an index in [0, n) using rejection sampling:
Choose a uniform random integer i in [0, n).
With probability weights[i]/max, return i. Otherwise, go to step 1. (For example, if all the weights are integers greater than 0, choose a uniform random integer in [1, max] and if that number is weights[i] or less, return i; otherwise go to step 1.) A code sketch of this loop follows the list below.
Each time the weighted choice algorithm chooses an index, set the weight for the chosen index to 0 to keep it from being chosen again. Or...
Assign each index an exponentially distributed random number (with a rate equal to that index's weight), make a list of pairs assigning each number to an index, then sort that list by those numbers. Then take each item from first to last, in ascending order. This sorting can be done on-line using a priority queue data structure (a technique that leads to weighted reservoir sampling). Notice that the naïve way to generate the random number, -ln(1-RNDU01())/weight, where RNDU01() is a uniform random number in [0, 1], is not robust, however ("Index of Non-Uniform Distributions", under "Exponential distribution").
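A minimal sketch of the rejection-sampling loop from option 1, in C++ (names are mine; for sampling without replacement you would zero out weights[i] after each draw, as described above):

#include <algorithm>
#include <random>
#include <vector>

// Returns a random index with probability proportional to weights[i].
// Assumes at least one weight is positive.
int rejection_sample(const std::vector<double>& weights, std::mt19937& rng) {
    double max_w = *std::max_element(weights.begin(), weights.end());
    std::uniform_int_distribution<std::size_t> pick(0, weights.size() - 1);
    std::uniform_real_distribution<double> accept(0.0, 1.0);
    while (true) {
        std::size_t i = pick(rng);                // step 1: uniform index
        if (accept(rng) < weights[i] / max_w)     // step 2: accept w.p. w/max
            return (int)i;
    }
}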
Tim Vieira gives additional options in his blog.
A paper by Bram van de Klundert compares various algorithms.
EDIT (Aug. 19): Note that for these solutions, the weight expresses how likely a given item will appear first in the sample. This weight is not necessarily the chance that a given sample of n items will include that item (that is, an inclusion probability). The methods given above will not necessarily ensure that a given item will appear in a random sample with probability proportional to its weight; for that, see "Algorithms of sampling with equal or unequal probabilities".
Assuming you want to choose items at random with replacement, here is pseudocode implementing this kind of choice. Given a list of weights, it returns a random index (starting at 0), chosen with a probability proportional to its weight. This algorithm is a straightforward way to implement weighted choice. But if it's too slow for you, see my section "Weighted Choice With Replacement" for a survey of other algorithms.
METHOD WChoose(weights, value)
    // Choose the index according to the given value
    lastItem = size(weights) - 1
    runningValue = 0
    for i in 0...size(weights) - 1
        if weights[i] > 0
            newValue = runningValue + weights[i]
            lastItem = i
            // NOTE: Includes start, excludes end
            if value < newValue: break
            runningValue = newValue
        end
    end
    // If we didn't break above, this is a last
    // resort (might happen because rounding
    // error happened somehow)
    return lastItem
END METHOD

METHOD WeightedChoice(weights)
    return WChoose(weights, RNDINTEXC(Sum(weights)))
END METHOD
Let A be the item array with x items. The complexity of each method is given as
< preprocessing_time, querying_time >
If sorting is possible: < O(x lg x), O(n) >
Sort A by the weight of the items.
Create an index array B, for example:
B = [ 0, 0, 0, x/2, x/2, x/2, x/2, x/2 ].
It's clear that B gives index x/2 a higher probability of being chosen.
While you haven't picked n elements yet, choose a random element e from B.
Pick a random element from A within the interval e : x-1.
If iterating through the items is possible: < O(x), O(tn) >
Iterate through A and find the average weight w of the elements.
Define a maximum number of tries t.
Try (at most t times) to pick a random element of A whose weight is bigger than w.
Test for some t that gives you good/satisfactory results.
If nothing above is possible: < O(1), O(tn) >
Define a maximum number of tries t.
While you haven't picked n elements yet, take t random elements from A and pick the one with the biggest weight (a sketch follows this list).
Test for some t that gives you good/satisfactory results.
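The last heuristic is essentially tournament selection; a minimal sketch (function name mine, t >= 1 assumed):

#include <random>
#include <vector>

// Draw t random indices and keep the heaviest: a cheap, approximate
// bias toward high-weight items (not exact weighted sampling).
int tournament_pick(const std::vector<double>& w, int t, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, w.size() - 1);
    std::size_t best = pick(rng);
    for (int i = 1; i < t; ++i) {
        std::size_t c = pick(rng);
        if (w[c] > w[best]) best = c;
    }
    return (int)best;
}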

Faster way of searching array of sets

I have an array containing 100,000 sets. Each set contains natural numbers below 1,000,000. I have to find the number of unordered pairs {m, n}, where 0 < m < 1,000,000, 0 < n < 1,000,000 and m != n, which do not occur together in any of the 100,000 sets. A naive search through all the sets requires 10^5 * (10^6 choose 2) checks.
For example, say I have 2 sets: set1 = {1,2,4} and set2 = {1,3}. All possible pairs of numbers below 5 are {1,2}, {1,3}, {1,4}, {2,3}, {2,4} and {3,4}. The pairs of numbers below 5 which do not occur together in set1 are {1,3}, {2,3} and {3,4}. The pairs missing from set2 are {1,2}, {1,4}, {2,3}, {2,4} and {3,4}. The pairs which do not occur together in either set are {2,3} and {3,4}, so the count of missing pairs is 2.
Can anybody point me to a clever way of organizing my data structure so that finding the number of missing pairs is faster? I apologize in advance if this question has been asked before.
Update:
Here is some information about the structure of my data set.
The number of elements in each set varies from 2 to 500,000. The median number of elements is around 10,000. The distribution peaks around 10,000 and tapers off in both directions. The union of the elements of the 100,000 sets is close to 1,000,000.
If you are looking for combinations across sets, there is a way to meaningfully condense your dataset, as shown in frenzykryger's answer. However, from your examples, what you're looking for is the number of combinations available within each set, meaning each set contains irreducible information. Additionally, you can't use combinatorics to simply obtain the number of combinations from each set either; you ultimately want to deduplicate combinations across all sets, so the actual combinations matter.
Knowing all this, it is difficult to think of any major breakthroughs you could make. Let's say you have i sets and a maximum of k items in each set. The naive approach would be:
If your sets are typically dense (i.e. contain most of the numbers between 1 and 1,000,000), replace them with the complement of the set instead
Create a set of 2-tuples (use a set structure that ensures insertion is idempotent)
For each set O(i):
Evaluate all combinations and insert into set of combinations: O(k choose 2)
The worst case complexity for this isn't great, but assuming you have scenarios where a set either contains most of the numbers between 0 and 1,000,000, or almost none of them, you should see a big improvement in performance.
Another approach would be to go ahead and use combinatorics to count the number of combinations from each set, then use some efficient approach to find the number of duplicate combinations among sets. I'm not aware of such an approach, but it is possible it exists.
First let's solve the simpler task of counting the number of elements not present in your sets. This task can be reworded more simply: instead of 100,000 sets you can think of 1 set which contains all your numbers. Then the number of elements not present in this set is x = 1000000 - len(set). Now you can use this number x to count the number of combinations. With repetitions: x * x; without repetitions: x * (x - 1). So the bottom line of my answer is to put all your numbers in one big set and use its length to find the number of combinations using combinatorics.
Update
So above we have a way to find the number of combinations where each element of the combination is absent from all of the sets. But the question was to find the number of combinations where the combination itself is not present in any of the sets.
Let's try to solve a simpler problem first:
your sets have all numbers in them, none missing
each number is present exactly in one set, no duplicates across sets
How would you construct such combinations over such sets? You would simply pick two elements from different sets, and the resulting combination would not be in any of the sets. The number of such combinations can be counted using the following code (it accepts the sizes of the sets):
long long count_combinations(vector<int>& buckets) {
    long long result = 0;  // 64-bit: bucket products can overflow int
    for (size_t i = 0; i < buckets.size(); ++i) {
        for (size_t j = i + 1; j < buckets.size(); ++j) {
            result += (long long)buckets[i] * buckets[j];
        }
    }
    return result;
}
Now let's imagine that some numbers are missing. Then we can just add an additional set with those missing numbers to our sets (as a separate set). But we also need to account for the fact that, given n missing numbers, there would be n * (n-1) combinations constructed using only those missing numbers. So the following code will produce the total number of combinations, accounting for the missing numbers (bucket_sizes here holds the sizes of the original sets):
long long missing_numbers = upper_bound - all_numbers.size() - 1;
long long missing_combinations = missing_numbers * (missing_numbers - 1);
// treat the missing numbers as one extra bucket for count_combinations
bucket_sizes.push_back((int)missing_numbers);
return missing_combinations + count_combinations(bucket_sizes);
Now let's imagine we have a duplicate across two sets: {a, b, c}, {a, d}.
What types of errors will it introduce? The following pairs: {a, a}, a repetition, and {a, d}, a combination that is actually present in the second set.
So how do we treat such duplicates? We need to eliminate them completely from all sets. Even a single remaining instance of a duplicated element will produce a combination present in some set, because we can pick any element from the set where the duplicate was removed and produce such a combination (in my example: if we keep a in the first set, we can pick d from the second to produce {a, d}; if we keep a in the second set, we can pick b or c from the first to produce {a, b} and {a, c}). So duplicates must be removed.
Update
However, we can't simply remove all duplicates; consider this counterexample:
{a, b} {a, c} {d}. If we simply remove a we get {b} {c} {d} and lose the information about the non-existing combination {a, d}. Consider another counterexample:
{a, b} {a, b, c} {b, d}. If we simply remove duplicates we get {c} {d} and lose the information about {a, d}.
Also, we can't simply apply such logic to pairs of sets; a simple counterexample for numbers < 3 is {1, 2} {1} {2}. Here the number of missing combinations is 0, but we would incorrectly count {1, 2} if we applied duplicate removal to each pair of sets. The bottom line is that I can't come up with a good technique to correctly handle duplicate elements across sets.
What you can do, depending on memory requirements, is take advantage of the ordering of std::set and iterate over the values smartly. Something like the code below (untested). You iterate over all of your sets, and for each set you iterate over its values; for each value, you check all of the values in the set after it. The complexity is reduced to the number of sets times the square of their sizes. You can use a variety of methods to keep track of your found/unfound count, but a set should be fine, since insertion is only O(log(n)) where n is no more than 499999500000. In theory, a map of sets (keyed on the first value) could be slightly faster, but in either case the cost is minimal.
// assumes #include <array>, <set>, <utility> and <iterator>
long long numMissing(const std::array<std::set<int>, 100000>& sets){
    std::set<std::pair<int, int>> found;
    for (const auto& s : sets){
        for (auto m = s.cbegin(); m != s.cend(); ++m){
            for (auto n = std::next(m); n != s.cend(); ++n){
                found.emplace(*m, *n); // (smaller, larger): set iteration is ordered
            }
        }
    }
    return 499999500000LL - static_cast<long long>(found.size());
}
As an option you can build a Bloom filter over your sets.
Before checking against all sets you can quickly look a pair up in the Bloom filter; since a Bloom filter never produces false negatives, whenever the lookup fails you can safely count the pair as not present in your sets.
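A minimal sketch of such a filter, packing each pair into one 64-bit key (the class name and hash scheme are illustrative, not from a library; nbits > 0 assumed):

#include <cstdint>
#include <vector>

// Bloom filter over pairs (m, n). No false negatives: if contains()
// returns false, the pair was definitely never inserted.
class PairBloom {
    std::vector<bool> bits_;
    static std::uint64_t mix(std::uint64_t x) {  // splitmix64 finalizer
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    std::uint64_t slot(std::uint32_t m, std::uint32_t n, int k) const {
        return mix(((std::uint64_t(m) << 32) | n) + k) % bits_.size();
    }
public:
    explicit PairBloom(std::size_t nbits) : bits_(nbits, false) {}
    void insert(std::uint32_t m, std::uint32_t n) {
        for (int k = 0; k < 3; ++k) bits_[slot(m, n, k)] = true;  // 3 hashes
    }
    bool contains(std::uint32_t m, std::uint32_t n) const {
        for (int k = 0; k < 3; ++k)
            if (!bits_[slot(m, n, k)]) return false;
        return true;  // possibly present (false positives allowed)
    }
};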
Physically storing each possible pair would take too much memory. We have 100k sets and an average set has 10k numbers = 50M pairs = 400MB with int32 (and set<pair<int, int>> needs much more than 8 bytes per element).
My suggestion is based on two ideas:
don't store, only count the missing pairs
use interval set for compact storage and fast set operations (like boost interval set)
The algorithm is still quadratic on the number of elements in the sets but needs much less space.
Algorithm:
Create the union_set of the individual sets.
We also need a data structure, let's call it sets_for_number to answer this question: which sets contain a particular number? For the simplest case this could be unordered_map<int, vector<int>> (vector stores set indices 0..99999)
Also create the inverse sets for each set. Using interval sets this takes only 10k * 2 * sizeof(int) space per set on average.
// sketch only; interval_set is boost::icl-style and not compilable as-is
dynamic_bitset<> union_set = ...; // union of individual sets (can be vector<bool>)
vector<interval_set<int>> inverse_sets = ...; // numbers 1..999999 not contained in each set
int64_t missing_count = 0;
for (int n = 1; n < 1000000; ++n) {
    // count the missing pairs whose first element is n
    if (!union_set[n]) {
        // n occurs in no set at all, so every pair (n, m) with m > n is missing
        missing_count += (999999 - n);
    } else {
        // check which second elements are not present
        interval_set<int> missing_second_elements = interval_set<int>(n+1, 1000000);
        // iterate over all sets containing n
        for (int set_idx : sets_for_number[n]) {
            // operator&= is in-place intersection
            missing_second_elements &= inverse_sets[set_idx];
        }
        // count the pairs (n, m) where m is a number that is not
        // present in any of the sets containing n
        for (auto interval : missing_second_elements)
            missing_count += interval.size();
    }
}
If it is possible, keep a set of all the numbers and remove each number from it when you insert into your array of sets. This has O(n) space complexity.
Of course, if you don't want that much space, you can keep a range vector instead: for each element of the vector you store a pair of numbers, the start/end of a range.

Algorithm to find isomorphic set of permutations

I have an array of set of permutations, and I want to remove isomorphic permutations.
We have S sets of permutations, where each set contain K permutations, and each permutation is represented as and array of N elements. I'm currently saving it as an array int pset[S][K][N], where S, K and N are fixed, and N is larger than K.
Two sets of permutations, A and B, are isomorphic, if there exists a permutation P, that converts elements from A to B (for example, if a is an element of set A, then P(a) is an element of set B). In this case we can say that P makes A and B isomorphic.
My current algorithm is:
We choose all pairs s1 = pset[i] and s2 = pset[j], such that i < j
Each element of the chosen sets (s1 and s2) is numbered from 1 to K. That means each element can be represented as s1[i] or s2[i], where 0 < i < K+1
For every permutation T of K elements, we do the following:
Find the permutation R, such that R(s1[1]) = s2[1]
Check if R is a permutation that makes s1 and T(s2) isomorphic, where T(s2) is a rearrangement of the elements (permutations) of the set s2; basically we just check whether R(s1[i]) = s2[T[i]], where 0 < i < K+1
If not, then we go to the next permutation T.
This algorithm works really slowly: O(S^2) for the first step, O(K!) to loop through each permutation T, O(N^2) to find the R, and O(K*N) to check whether R is the permutation that makes s1 and s2 isomorphic; so it is O(S^2 * K! * N^2) overall.
Question: Can we make it faster?
You can sort and compare:
// 1 - sort each set of permutation
for i = 0 to S-1
    sort(pset[i])
// 2 - sort the array of permutations itself
sort(pset)
// 3 - compare
for i = 1 to S-1 {
    if(areEqual(pset[i], pset[i-1]))
        // pset[i] and pset[i-1] are isomorphic
}
A concrete example:
0: [[1,2,3],[3,2,1]]
1: [[2,3,1],[1,3,2]]
2: [[1,2,3],[2,3,1]]
3: [[3,2,1],[1,2,3]]
After 1:
0: [[1,2,3],[3,2,1]]
1: [[1,3,2],[2,3,1]] // order changed
2: [[1,2,3],[2,3,1]]
3: [[1,2,3],[3,2,1]] // order changed
After 2:
2: [[1,2,3],[2,3,1]]
0: [[1,2,3],[3,2,1]]
3: [[1,2,3],[3,2,1]]
1: [[1,3,2],[2,3,1]]
After 3:
(2, 0) not isomorphic
(0, 3) isomorphic
(3, 1) not isomorphic
What about the complexity?
1 is O(S * (K * N) * log(K * N))
2 is O(S * K * N * log(S * K * N))
3 is O(S * K * N)
So the overall complexity is O(S * K * N * log(S * K * N))
There is a very simple solution for this: transposition.
If two sets are isomorphic, it means a one-to-one mapping exists, where the set of all the numbers at index i in set S1 equals the set of all the numbers at some index k in set S2. My conjecture is that no two non-isomorphic sets have this property.
(1) Jean Logeart's example:
0: [[1,2,3],[3,2,1]]
1: [[2,3,1],[1,3,2]]
2: [[1,2,3],[2,3,1]]
3: [[3,2,1],[1,2,3]]
Perform ONE pass:
Transpose, O(n):
0: [[1,3],[2,2],[3,1]]
Sort both in and between groups, O(something log something):
0: [[1,3],[1,3],[2,2]]
Hash:
"131322" -> 0
...
"121233" -> 1
"121323" -> 2
"131322" -> already hashed.
0 and 3 are isomorphic.
(2) vsoftco's counter-example in his comment to Jean Logeart's answer:
A = [ [0, 1, 2], [2, 0, 1] ]
B = [ [1, 0, 2], [0, 2, 1] ]
"010212" -> A
"010212" -> already hashed.
A and B are isomorphic.
You can turn each set into a transposed-sorted string or hash or whatever compressed object for linear-time comparison. Note that this algorithm considers all three sets A, B and C as isomorphic even if one p converts A to B and another p converts A to C. Clearly, in this case, there are p's to convert any one of these three sets to the others, since all we are doing is moving each i in one set to a specific k in the other. If, as you stated, your goal is to "remove isomorphic permutations," you will still get a list of sets to remove.
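A sketch of that canonicalization in C++ (function name mine): transpose the K permutations into N columns, sort within each column, sort the columns, and serialize; under the conjecture above, two sets are isomorphic exactly when their keys match.

#include <algorithm>
#include <string>
#include <vector>

// Canonical key for a non-empty set of K permutations of length N.
std::string canonical_key(const std::vector<std::vector<int>>& perms) {
    std::size_t n = perms[0].size();
    std::vector<std::vector<int>> cols(n);               // transpose
    for (std::size_t i = 0; i < n; ++i)
        for (const auto& p : perms) cols[i].push_back(p[i]);
    for (auto& c : cols) std::sort(c.begin(), c.end());  // sort in groups
    std::sort(cols.begin(), cols.end());                 // sort between groups
    std::string key;
    for (const auto& c : cols)
        for (int v : c) { key += std::to_string(v); key += ','; }
    return key;
}

On the example above, both set 0 and set 3 produce the key "1,3,1,3,2,2,", matching the "131322" hash.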
Explanation:
Assume that along with our sorted hash, we kept a record of which permutation each i came from. vsoftco's counter-example:
010212 // hash for A and B
100110 // origin permutation, set A
100110 // origin permutation, set B
In order to confirm isomorphism, we need to show that the i's grouped in each index from the first set moved to some index in the second set, which index does not matter. Sorting the groups of i's does not invalidate the solution, rather it serves to confirm movement/permutation between sets.
Now by definition, each number in a hash and each number in each group in the hash is represented in an origin permutation exactly one time for each set. However we choose to arrange the numbers in each group of i's in the hash, we are guaranteed that each number in that group is representing a different permutation in the set; and the moment we theoretically assign that number, we are guaranteed it is "reserved" for that permutation and index only. For a given number, say 2, in the two hashes, we are guaranteed that it comes from one index and permutation in set A, and in the second hash corresponds to one index and permutation in set B. That is all we really need to show - that the number in one index for each permutation in one set (a group of distinct i's) went to one index only in the other set (a group of distinct k's). Which permutation and index the number belongs to is irrelevant.
Remember that any set S2, isomorphic to set S1, can be derived from S1 using one permutation function or various combinations of different permutation functions applied to S1's members. What the sorting, or reordering, of our numbers and groups actually represents is the permutation we are choosing to assign as the solution to the isomorphism rather than an actual assignment of which number came from which index and permutation. Here is vsoftco's counter-example again, this time we will add the origin indexes of our hashes:
110022 // origin index set A
001122 // origin index set B
Therefore our permutation, a solution to the isomorphism, is: 1 -> 0, 0 -> 1, 2 -> 2.
Or, in order: 0 -> 1, 1 -> 0, 2 -> 2.
(Notice that in Jean Logeart's example there is more than one solution to the isomorphism.)
Suppose that two elements s1, s2 in S are isomorphic. Then if p1 and p2 are permutations, s1 is isomorphic to s2 iff p1(s1) is isomorphic to p2(s2), where pi(si) is the set of permutations obtained by applying pi to every element in si.
For each i in 1...s and j in 1...k, choose the j-th member of si and find the permutation that maps it to the identity. Apply that permutation to all the elements of si. Hash each of the k resulting permutations to a number, obtaining k numbers for any choice of i and j, at cost nk.
Comparing the hashed sets for two different values of i and j costs k^2 < nk. Thus, you can find the set of candidate matches at cost s^2 k^3 n. If the actual number of matches is low, the overall complexity is far beneath what you specified in your question.
Take a0 in A. Then find its inverse (fast, O(N)) and call it a0inv. Then choose some i in B and define P_i = b_i * a0inv, and check that P_i * a generates B as a varies over A. Do this for every i in B. If you don't find any i for which the relation holds, the sets are not isomorphic. If you find such an i, the sets are isomorphic. The runtime is O(K^2 * N) for each pair of sets it checks, and you'd need to check O(S^2) pairs, so you end up with O(S^2 * K^2 * N).
PS: I assumed here that by "maps A to B" you mean mapping under permutation composition, so P(a) is actually the permutation P composed with the permutation a, and I've used the fact that if such a P exists, then there must exist an i for which P * a0 = b_i.
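A sketch of this check in C++ (names are mine; a permutation is stored as a vector p with p[i] the image of i):

#include <set>
#include <vector>

using Perm = std::vector<int>;               // p[i] = image of i

Perm inverse(const Perm& p) {
    Perm inv(p.size());
    for (int i = 0; i < (int)p.size(); ++i) inv[p[i]] = i;
    return inv;
}

Perm compose(const Perm& p, const Perm& q) { // (p*q)(i) = p[q[i]]
    Perm r(q.size());
    for (int i = 0; i < (int)q.size(); ++i) r[i] = p[q[i]];
    return r;
}

// A and B are isomorphic iff some candidate P = b * a0inv maps every
// a in A into B.
bool isomorphic(const std::vector<Perm>& A, const std::vector<Perm>& B) {
    std::set<Perm> bset(B.begin(), B.end());
    Perm a0inv = inverse(A[0]);
    for (const Perm& b : B) {
        Perm P = compose(b, a0inv);          // candidate mapping
        bool ok = true;
        for (const Perm& a : A)
            if (!bset.count(compose(P, a))) { ok = false; break; }
        if (ok) return true;
    }
    return false;
}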
EDIT: I decided to undelete my answer, as I am not convinced the previous one (Jean Logeart's), based on searching, is correct. If it is, I'll gladly delete mine, as it performs worse, but I think I have a counterexample; see the comments below Jean's answer.
To check if two sets S₁ and S₂ are isomorphic you can do a much shorter search.
If they are isomorphic then there is a permutation t that maps each element of S₁ to an element of S₂; to find t you can just pick any fixed element p of S₁ and consider the permutations
t₁ = (1/p) q₁
t₂ = (1/p) q₂
t₃ = (1/p) q₃
...
for all elements q of S₂. For, if a valid t exists then it must map the element p to an element of S₂, so only permutations mapping p to an element of S₂ are possible candidates.
Moreover, given a candidate t, to check whether the two sets of permutations S₁t and S₂ are equal you could use a hash computed as the x-or of a hash code for each element, doing the full check of all the permutations only if the hashes match.

O(log n) algorithm to find the element having rank i in union of pre-sorted lists

Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct?
EDIT:
@Ben: This is what I have been doing, but I am still not getting it.
Here is an example:
List A: 1, 3, 5, 7
List B: 2, 4, 6, 8
Find rank(i) = 4.
First step: i/2 = 2;
List A is now: 1, 3
List B is now: 2, 4
Compare A[i] to B[i], i.e.
A[i] is less;
So the lists now become:
A: 3
B: 2, 4
Second step:
i/2 = 1
List A is now: 3
List B is now: 2
Now I have lost the value 4, which is actually the result...
I know I am missing something, but even after close to a day of thinking I can't figure this one out...
Yes:
You know the element lies within either index [0,i] of the first list or [0,i] of the second list. Take element i/2 from each list and compare. Proceed by bisection.
I'm not including any code because this problem sounds a lot like homework.
EDIT: Bisection is the method behind binary search. It works like this:
Assume i = 10 (zero-based indexing; we're looking for the 11th element overall).
On the first step, you know the answer is either in list1(0...10) or list2(0...10). Take a = list1(5) and b = list2(5).
If a > b, then there are 5 elements in list1 which come before a, and at least 6 elements in list2 which come before a. So a is an upper bound on the result. Likewise, there are 5 elements in list2 which come before b and fewer than 6 elements in list1 which come before b. So b is a lower bound on the result. Now we know that the result is either in list1(0..5) or list2(5..10). If a < b, then the result is either in list1(5..10) or list2(0..5). And if a == b we have our answer (but the problem said the elements were distinct, therefore a != b).
We just repeat this process, cutting the size of the search space in half at each step. Bisection refers to the fact that we choose the middle element (bisector) out of the range we know includes the result.
So the only difference between this and binary search is that in binary search we compare to a value we're looking for, but here we compare to a value from the other list.
NOTE: this is actually O(log i), which is at least as good as O(log n). Furthermore, for small i (perhaps i < 100), it would actually be fewer operations to merge the first i elements (linear search instead of bisection) because that is so much simpler. When you add in cache behavior and data locality, the linear search may well be faster for i up to several thousand.
Also, if i > n, then rely on the fact that the result has to be toward the end of either list: your initial candidate range in each list is ((i-n)..n).
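For completeness, here is one way to code the bisection, as a sketch in C++ (1-based rank k; function name mine). Each step compares the k/2-th remaining element of each list and discards the smaller prefix, which is exactly the halving argument above:

#include <algorithm>

// Element of rank k (1-based) in the union of two sorted, duplicate-free
// arrays a (length m) and b (length n). O(log k) steps.
int kth_of_union(const int* a, int m, const int* b, int n, int k) {
    if (m == 0) return b[k - 1];
    if (n == 0) return a[k - 1];
    if (k == 1) return std::min(a[0], b[0]);
    int i = std::min(m, k / 2);
    int j = std::min(n, k / 2);
    if (a[i - 1] < b[j - 1])
        // a[0..i-1] all rank below k: discard them and lower k
        return kth_of_union(a + i, m - i, b, n, k - i);
    else
        return kth_of_union(a, m, b + j, n - j, k - j);
}

On the example above (A = 1, 3, 5, 7; B = 2, 4, 6, 8), kth_of_union(A, 4, B, 4, 4) returns 4.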
Here is how you do it.
Let the first list be ListX and the second list be ListY. We need to find the right combination of ListX[x] and ListY[y] where x + y = i. Since x, y, i are natural numbers we can immediately constrain our problem domain to the product x*y, and by using the equations max(x) = len(ListX) and max(y) = len(ListY) we now have a subset of x*y elements, of the form [x, y], that we need to search.
What you do is order those elements like so: [i - max(y), max(y)], [i - max(y) + 1, max(y) - 1], ..., [max(x), i - max(x)]. You then bisect this list by choosing the middle [x, y] combination. Since the lists are ordered and distinct you can test ListX[x] < ListY[y]. If true, you bisect the upper half of your [x, y] combinations; if false, you bisect the lower half. You keep bisecting until you find the right combination.
There are a lot of details I left out, but that is the general gist of it. It is indeed O(log(n))!
Edit: As Ben pointed out this actually O(log(i)). If we let n = len(ListX) + len(ListY) then we know that i <= n.
When merging two lists, you're going to have to touch every element in both lists. If you don't touch every element, some elements will be left behind. Thus your theoretical lower bound is O(n). So you can't do it that way.
You don't have to sort, since you have two lists that are already sorted, and you can maintain that ordering as part of the merge.
edit: oops, I misread the question. I thought that, given a value, you want to find the rank, not the other way around. If you want to find rank given value, then this is how to do it in O(log N):
Yes, you can do this in O(log N), if the list allows O(1) random access (i.e. it's an array and not a linked list).
Binary search on L1
Binary search on L2
Sum the indices
You'd have to work out the math (+1, -1, what to do if the element isn't found, etc.), but that's the idea.
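A sketch of that in C++ (names are mine), counting the elements strictly smaller than the value:

#include <algorithm>
#include <vector>

// Rank of `value` in the union of two sorted lists: one binary search
// per list, O(log n) overall.
std::size_t rank_of_value(const std::vector<int>& l1,
                          const std::vector<int>& l2, int value) {
    std::size_t r1 = std::lower_bound(l1.begin(), l1.end(), value) - l1.begin();
    std::size_t r2 = std::lower_bound(l2.begin(), l2.end(), value) - l2.begin();
    return r1 + r2;
}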

C/C++ implementation of an algorithm similar to subset sum

The problem is simpler than knapsack (it is a variant of it, without values and with only positive weights). The problem consists of checking whether a number can be a combination of the others. The function should return true or false.
For example,
112 with the list { 17, 100, 101 } should return false, 469 with the same list should return true, 35 should return false, 119 should return true, etc...
Edit: subset sum problem would be more accurate for this than knapsack.
This is a special case of the Subset Sum problem, with sets that only contain one negative number (i.e., express 112 and { 17, 100, 101 } as { -112, 17, 100, 101 }). There's a few algorithms on the Wikipedia page, http://en.wikipedia.org/wiki/Subset_sum_problem.
An observation that will help you: if your list is {a, b, c, ...} and the number you want to test is x, then x can be written as a combination only if either x can be written as a combination of the sublist {b, c, ...}, or x - a can be written as a combination of the full list (so a may be used again). This lets you write a very simple recursive algorithm to solve the problem.
edit: here is some code, taking into account the comments below. Not tested, so probably buggy, and not necessarily the fastest; but for a small dataset it will get the job done neatly.
// assumes #include <list> and <iterator>, and a non-empty list of positive values
bool is_subset_sum(int x, std::list<int>::const_iterator start,
                   std::list<int>::const_iterator end)
{
    // for a 1-element list {a} we just need to test whether a divides x
    if (std::next(start) == end) return (x % *start == 0);
    // if x is smaller than a we don't need to bother testing x - a
    if (x < *start) return is_subset_sum(x, std::next(start), end);
    // the default case. Note that the short-circuiting of || means the
    // process ends as soon as we get a positive.
    return is_subset_sum(x, std::next(start), end)
        || is_subset_sum(x - *start, start, end);
}
Note that positive results become denser as the queried number becomes larger. For example, all numbers greater than 100^2 can be generated by { 17, 100, 101 }. So the optimal algorithm may depend on whether the queried number is much greater than the set's members; you might look into the coin problem and Frobenius numbers.
At the least, you know the result is always false if the queried number is not a multiple of the greatest common divisor of the set, and that can be checked in negligible time.
If the number to reach is not too large, you can probably generate all the reachable numbers from the set that fall in the range [1,N].
Problem: reach N using the elements in the list L, where N is small enough that a vector of N elements is not a concern.
Algorithm:
Generate a boolean vector V of size N+1 and mark 0 (the empty combination) as reachable
For each element l in the list L
    For each reachable element v in V
        mark all elements v + n*l (n = 1, 2, ...) that fall within [1, N] as reachable
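A sketch of that sieve in C++ (function name mine); it marks every sum reachable with unlimited reuse of the list elements, in O(N * |L|) time:

#include <vector>

// reachable[v] == true iff v is a non-negative integer combination of
// the elements of L.
std::vector<bool> reachable_up_to(int N, const std::vector<int>& L) {
    std::vector<bool> reachable(N + 1, false);
    reachable[0] = true;                      // the empty combination
    for (int l : L)
        for (int v = l; v <= N; ++v)
            if (reachable[v - l]) reachable[v] = true;
    return reachable;
}

For the question's example, reachable_up_to(469, {17, 100, 101}) marks index 469 true and index 112 false.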