In Random Forest, each subset is taken randomly to build a tree. But how can we guarantee that we are taking a unique subset?
If my N = 2, two random samples are generated to build the tree. As the data is chosen randomly, the same data can end up in another subset. How can we make sure that doesn't happen?
Is there any mechanism for this?
You don't.
There is no reason to make them disjoint: bagging draws each subset with replacement, so overlap between the subsets is expected. If you only have a little data, you may need to consider 90% of it every time, for example.
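For illustration only (not part of the original answer), here is a minimal C++ sketch of drawing one bootstrap subset by sampling row indices with replacement; the function name and parameters are invented for the example:

#include <cstddef>
#include <random>
#include <vector>

// Draw one bootstrap subset: sample `sampleSize` row indices from a dataset of
// `datasetSize` rows, with replacement. Duplicates within a subset and overlap
// between subsets are expected -- bagging relies on exactly that.
std::vector<std::size_t> drawBootstrapIndices(std::size_t datasetSize,
                                              std::size_t sampleSize,
                                              std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, datasetSize - 1);
    std::vector<std::size_t> indices;
    indices.reserve(sampleSize);
    for (std::size_t i = 0; i < sampleSize; ++i) {
        indices.push_back(pick(rng));
    }
    return indices;
}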
Hello, I've been wondering what the most efficient way is to sort elements into "bins" in C++.
For example, we might have 3 bins:
Bin A accepts integers from 0-10
Bin B accepts integers from 5-20
Bin C accepts integers from 25-30
I want to be able to put an integer into the correct bin based on the range the bin accepts. If the int falls into the range of multiple bins, put it into the bin that is the least full. Finally, I want to be able to tell if the integer doesn't fit into any bin.
What is the most efficient way to do this?
It depends.
If the number of bins is small, a simple linear search is usually plenty efficient, especially if you take the programmer's time into consideration.
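For the small-number-of-bins case, a minimal sketch (my own illustration; the Bin struct and the assignToBin name are invented) of the linear scan that picks the least-full matching bin:

#include <vector>

struct Bin {
    int lo;                  // inclusive lower bound
    int hi;                  // inclusive upper bound
    std::vector<int> items;  // values placed in this bin so far
};

// Linear scan over all bins: among the bins whose range contains `value`,
// choose the least full one. Returns false if no bin accepts the value.
bool assignToBin(std::vector<Bin>& bins, int value) {
    Bin* best = nullptr;
    for (Bin& b : bins) {
        if (value >= b.lo && value <= b.hi &&
            (best == nullptr || b.items.size() < best->items.size())) {
            best = &b;
        }
    }
    if (best == nullptr) return false;  // value fits no bin
    best->items.push_back(value);
    return true;
}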
If the set of integers is small and the bins don't often overlap, it can be done in constant time. We create a vector indexed by the integer, whose value is another vector of (pointers to) the applicable bins, so you can look up the list of applicable bins in constant time. You still need to loop through the applicable bins to find out which is the least full, so this is only constant time if the maximum number of overlapping bins is bounded by a constant.
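A rough sketch of that lookup-table idea, assuming (my assumption) that the integers lie in a small known range [0, maxValue], with the same hypothetical Bin struct as in the previous sketch:

#include <algorithm>
#include <vector>

struct Bin {
    int lo, hi;              // inclusive range accepted by this bin
    std::vector<int> items;
};

// Precompute, for every possible value, the list of bins that accept it.
// A lookup is then O(1) plus a scan over the (usually few) overlapping bins
// to find the least full one.
std::vector<std::vector<Bin*>> buildLookup(std::vector<Bin>& bins, int maxValue) {
    std::vector<std::vector<Bin*>> lookup(maxValue + 1);
    for (Bin& b : bins) {
        for (int v = std::max(b.lo, 0); v <= b.hi && v <= maxValue; ++v) {
            lookup[v].push_back(&b);
        }
    }
    return lookup;
}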
If the set of integers is large but the bins themselves are relatively small, we can do something similar, but using a hash table instead of a vector.
If we can't make any assumptions, then an interval tree is probably the best solution. But its worst-case behaviour is still linear in the number of bins, because the bins might all overlap and would all need to be checked to find the one that is least full.
I'm working on a problem where I keep an entire database table in memory at all times; each row has a low bound and a high bound, both 9-digit numbers, defining a range. I'm given a 9-digit number that I need to use to look up the rest of the columns in the table based on which range the number falls into. For example, if the range was 100,000,000 to 125,000,000 and I was given the number 117,123,456, then I would know that I'm in the 100-125 million range, and whatever vector of data that range points to is what I will be using.
Now the best I can think of for lookup time is O(log n). This is OK at best, but still pretty slow. The table has at least 100,000 entries and I will need to look up values in it tens of thousands, if not hundreds of thousands, of times per execution of this application (which runs 10+ times a day).
So I was wondering if it was possible to use an unordered_set instead, writing my own Hash function that ALWAYS returns the same hash-value for every number in range. Using the same example above, 100,000,000 through 125,000,000 will always return, for example, a hash value of AB12CD. Then when I use the lookup value of 117,123,456, I will get that same AB12CD hash and have a lookup time of O(1).
Is this possible, and if so, any ideas how?
Thanks in advance.
Yes. Assuming that you can number your intervals in order, you could fit a polynomial to your cutoff values, and receive an index value from the polynomial. For instance, with cutoffs of 100,000,000, 125,000,000, 250,000,000, and 327,000,000, you could use points (100, 0), (125, 1), (250, 2), and (327, 3), restricting the first derivative to [0, 1]. Assuming that you have decently-behaved intervals, you'll be able to fit this with an (N+2)th-degree polynomial for N cutoffs.
Have a table of desired hash values; use floor[polynomial(i)] for the index into the table.
Can you write such a hash function? Yes. Will evaluating it be slower than a search? Well there's the catch...
I would personally solve this problem as follows. I'd have a sorted vector of all values. And then I'd have a jump table of indexes into that vector based on the value of n >> 8.
So now your logic is: look in the jump table to figure out where you are jumping to and how many values you should consider. (Just compare where you land with the next jump-table entry to see the size of the range.) If the whole range maps to the same vector, you're done. If there are only a few entries, do a linear search to find where you belong. If there are a lot of entries, do a binary search. Experiment with your data to find when a binary search beats a linear search.
A vague memory suggests that the tradeoff is around 100 or so because predicting a branch wrong is expensive. But that is a vague memory from many years ago, so run the experiment for yourself.
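A rough sketch of that jump-table idea (my own interpretation; the names, the shift amount, and the binary-search-only lookup are illustrative choices, not taken from the answer):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// `cutoffs` holds the sorted lower bounds of the ranges; range i covers
// [cutoffs[i], cutoffs[i+1]). jump[k] is the index of the first cutoff that is
// >= (k << SHIFT), so a lookup only has to search a small slice of `cutoffs`.
constexpr unsigned SHIFT = 20;  // bucket width of about one million; tune for your data

std::vector<std::size_t> buildJumpTable(const std::vector<std::uint32_t>& cutoffs,
                                        std::uint32_t maxValue) {
    std::vector<std::size_t> jump((maxValue >> SHIFT) + 2);
    std::size_t i = 0;
    for (std::size_t k = 0; k < jump.size(); ++k) {
        while (i < cutoffs.size() && cutoffs[i] < (static_cast<std::uint64_t>(k) << SHIFT))
            ++i;
        jump[k] = i;
    }
    return jump;
}

// Returns the index of the range containing `key` (assumes cutoffs[0] <= key <= maxValue).
// Binary-searches only the slice of `cutoffs` selected by the jump table.
std::size_t findRange(const std::vector<std::uint32_t>& cutoffs,
                      const std::vector<std::size_t>& jump,
                      std::uint32_t key) {
    std::size_t lo = jump[key >> SHIFT];
    std::size_t hi = jump[(key >> SHIFT) + 1];
    auto it = std::upper_bound(cutoffs.begin() + lo, cutoffs.begin() + hi, key);
    return static_cast<std::size_t>(it - cutoffs.begin()) - 1;
}

Whether to search the slice linearly or with a binary search is exactly the experiment the answer suggests running.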
Problem: I need to sample from a discrete distribution constructed from weights {w1, w2, w3, ...}, i.e. the probability distribution {p1, p2, p3, ...} where pi = wi / (w1 + w2 + ...).
Some of the wi change very frequently, but only a small proportion of all of them. The distribution thus has to be renormalised every time that happens, and therefore I believe the alias method does not work efficiently, because the whole table would need to be rebuilt from scratch every time.
The method I am currently considering is a binary tree (heap method), where all the wi are stored at the lowest level, each node of the next level up holds the sum of two of them, and so on. The sum of all of them sits at the top, which is also the normalisation constant. To update the tree after a change to some wi one needs to make log(n) changes, and drawing a sample from the distribution costs the same.
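A minimal sketch of such a sum tree, written as an illustration (the class name and array layout are mine, not from the question); the weights live in the leaves of an array-backed complete binary tree, and both updating a weight and drawing a sample walk one root-to-leaf path:

#include <cstddef>
#include <random>
#include <vector>

// Array-backed sum tree: leaves hold the weights, each internal node holds the
// sum of its two children, and the root holds the total (the normalisation constant).
class SumTree {
public:
    explicit SumTree(const std::vector<double>& weights) {
        n_ = 1;
        while (n_ < weights.size()) n_ *= 2;  // round the leaf count up to a power of two
        tree_.assign(2 * n_, 0.0);
        for (std::size_t i = 0; i < weights.size(); ++i) tree_[n_ + i] = weights[i];
        for (std::size_t i = n_ - 1; i >= 1; --i) tree_[i] = tree_[2 * i] + tree_[2 * i + 1];
    }

    // O(log n): change weight i and repair the sums on the path to the root.
    void update(std::size_t i, double w) {
        std::size_t node = n_ + i;
        tree_[node] = w;
        for (node /= 2; node >= 1; node /= 2)
            tree_[node] = tree_[2 * node] + tree_[2 * node + 1];
    }

    // O(log n): draw an index with probability proportional to its weight.
    std::size_t sample(std::mt19937& rng) const {
        std::uniform_real_distribution<double> u(0.0, tree_[1]);  // tree_[1] is the total weight
        double x = u(rng);
        std::size_t node = 1;
        while (node < n_) {  // descend towards a leaf
            if (x < tree_[2 * node]) node = 2 * node;
            else { x -= tree_[2 * node]; node = 2 * node + 1; }
        }
        return node - n_;
    }

private:
    std::size_t n_ = 0;         // number of leaves (power of two)
    std::vector<double> tree_;  // 1-based heap layout
};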
Question:
Q1. Do you have a better idea on how to achieve it faster?
Q2. The most important part: I am looking for a library which has already done this.
Explanation: I did this myself several years ago by building the heap structure in a vector, but since then I have learned many things, including discovering libraries ( :) ) and containers such as map... Now I need to rewrite that code with more functionality, and I want to do it right this time:
So Q2.1: is there a nice way to make a C++ map ordered and searched not by index, but by a cumulative sum of its elements (this is how we sample, right?..)? (That is my current theory of how I would like to do it, but it doesn't have to be this way...)
Q2.2: Maybe there is an even nicer way to do the same? I would think this problem is so common that I am very surprised I could not find some library which would do it for me...
Thank you very much, and I am very sorry if this has been asked in some other form; if so, please direct me towards it, but I have spent a good while looking...
-z
Edit: There is a possibility that I might need to remove or add elements as well, but I think I could avoid that if it makes a huge difference, leaving only changes to the values of the weights.
Edit 2: The weights are reals in general; I would have to think about whether I could make them integers...
I would actually use a hash set of strings (I don't remember the exact C++ container for it; you might need to implement your own). For each i, put wi elements into the set, with values "wi_1", "wi_2", ... through "wi_[wi]" (that is, wi elements whose names start with "wi_").
When you need to sample, pick an element at random using a uniform distribution. If you picked "w5_*", say you picked element 5. Because of the number of copies of each element in the hash, this gives you the distribution you were looking for.
Now, when wi changes from A to B, just add B-A elements to the hash (if B>A), or remove the last A-B elements of wi (if A>B).
Adding new elements and removing old elements is trivial in this case.
Obviously the tricky part is 'pick an element at random'. If your hash is a closed hash (open addressing), you pick an array cell at random; if it's empty, just pick another one at random. If you keep your hash 3 or 4 times larger than the total sum of the weights, your complexity will be pretty good: O(1) expected for retrieving a random sample, O(|A-B|) for modifying a weight.
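A rough sketch of that rejection idea, under the assumption (mine) that the weights are small integers; a plain slot array stands in for the closed hash, and the names are invented:

#include <cstddef>
#include <random>
#include <vector>

// Each unit of weight of element `id` occupies one slot; empty slots hold -1.
// Keeping the array a few times larger than the total weight keeps the expected
// number of rejected picks a small constant.
class WeightedSlots {
public:
    explicit WeightedSlots(std::size_t capacity) : slots_(capacity, -1) {}

    // Give element `id` an (additional) weight of `w`: occupy `w` free slots.
    void addWeight(int id, std::size_t w) {
        for (std::size_t placed = 0, s = 0; placed < w && s < slots_.size(); ++s) {
            if (slots_[s] == -1) { slots_[s] = id; ++placed; }
        }
    }

    // O(1) expected: pick random cells until a non-empty one is hit.
    int sample(std::mt19937& rng) const {
        std::uniform_int_distribution<std::size_t> pick(0, slots_.size() - 1);
        for (;;) {
            int id = slots_[pick(rng)];
            if (id != -1) return id;
        }
    }

private:
    std::vector<int> slots_;
};

Decreasing a weight would analogously free |A-B| of that element's slots.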
Another option, since only a small part of your weights change, is to split the weights in two: a fixed part and a changing part. Then you only need to worry about changes in the changing part, and about the difference between the total weight of the changing part and the total weight of the fixed part. For the fixed part your hash becomes a simple array of numbers: 1 appears w1 times, 2 appears w2 times, and so on, and picking a random fixed element is just picking a random position in that array.
Updating your normalisation factor when you change a value is trivial. This might suggest an algorithm.
w_sum = w_sum_old - w_i_old + w_i_new;
If you leave p_i as a computed property, p_i = w_i / w_sum, you avoid recalculating the entire p_i array at the cost of computing p_i every time it is needed. You would, however, be able to update many statistical properties without recalculating the entire sum:
expected_something = (something_1 * w_1 + something_2 * w_2 + ...) / w_sum;
With a bit of algebra you can update expected_something by subtracting the contribution with the old weight and adding the contribution with the new weight, multiplying and dividing by the normalisation factors as required.
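Spelled out as code (my own rearrangement of the same algebra, with illustrative names):

// Incrementally update a weighted mean when the weight of one item changes from
// wOld to wNew. `value` is that item's something_i; `wSum` is updated in place.
double updateExpectation(double expected, double& wSum, double value,
                         double wOld, double wNew) {
    double weightedSum = expected * wSum;  // undo the old normalisation
    weightedSum += value * (wNew - wOld);  // swap the old contribution for the new one
    wSum = wSum - wOld + wNew;             // new normalisation factor
    return weightedSum / wSum;
}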
If, during sampling, you keep track of which outcomes are part of the sample, it would be possible to propagate the probability updates to the generated sample. Would this make it possible for you to update, rather than recalculate, values related to the sample? I think a bitmap could provide an efficient way to store an index of which outcomes were used to build the sample.
One way of storing the probabilities together with the sums is to start with all the probabilities, store the N/2 pairwise sums in the next N/2 positions, then the N/4 sums of those pairs, and so on. Where the sums are located can, obviously, be calculated in O(1) time. This data structure is sort of a heap, but upside down.
I have a sorted set (std::set to be precise) that contains elements with an assigned weight. I want to randomly choose N elements from this set, while the elements with higher weight should have a bigger probability of being chosen. Any element can be chosen multiple times.
I want to do this as efficiently as possible - I want to avoid any copying of the set (it might get very large) and run in O(N) time if possible. I'm using C++ and would like to stick to an STL + Boost only solution.
Does anybody know if there is a function in STL/Boost that performs this task? If not, how to implement one?
You need to calculate (and possibly cache, if performance matters) the sum of all weights in your set. Then generate N random numbers ranging up to this value. Finally, iterate over your set, keeping a running sum of the weights you have encountered so far. For each (remaining) random number, if it falls between the previous and the current value of the running sum, output the corresponding element and remove that random number. Stop when your list of random numbers is empty or you've reached the end of the set.
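A minimal sketch of that approach, assuming (my assumptions) the set stores plain values and a hypothetical weightOf function returns an element's weight:

#include <algorithm>
#include <cstddef>
#include <random>
#include <set>
#include <vector>

// Hypothetical accessor; in practice the weight might be stored inside the element.
double weightOf(int element) { return static_cast<double>(element); }

// Draw `n` elements (repetition allowed) proportionally to their weights:
// one pass to total the weights, one pass to place the sorted random targets.
std::vector<int> weightedSample(const std::set<int>& items, std::size_t n, std::mt19937& rng) {
    double total = 0.0;
    for (int e : items) total += weightOf(e);

    std::uniform_real_distribution<double> u(0.0, total);
    std::vector<double> targets(n);
    for (double& t : targets) t = u(rng);
    std::sort(targets.begin(), targets.end());  // sorted, so one sweep over the set suffices

    std::vector<int> result;
    double runningSum = 0.0;
    std::size_t next = 0;
    for (int e : items) {
        runningSum += weightOf(e);
        while (next < targets.size() && targets[next] < runningSum) {
            result.push_back(e);  // this element's weight span contains the target
            ++next;
        }
    }
    return result;
}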
I don't know about any libraries, but it sounds like you have a weighted roulette wheel. Here's a reference with some pseudo-code, although the context is related to genetic algorithms: http://www.cse.unr.edu/~banerjee/selection.htm
As for "as efficiently as possible," that would depend on some characteristics of the data. In the application of the weighted roulette wheel, when searching for the index you could consider a binary search instead. However, it is not the case that each slot of the roulette wheel is equally likely, so it may make sense to examine them in order of their weights.
A lot depends on the amount of extra storage you're willing to expend to make the selection faster.
If you're not willing to use any extra storage, @Alex Emelianov's answer is pretty much what I was thinking of posting. If you're willing to use some extra storage (and possibly a different data structure than std::set), you could create a tree (like the one a set uses), but at each node of the tree you'd also store the (weighted) number of items to the left of that node. This will let you map from a generated number to the correct associated value with logarithmic (rather than linear) complexity.
I am writing a generic hash map in C++ which uses chaining to deal with collisions.
Say I have a hash map with 11 buckets and I insert 8 items. The hash function distributes them as follows:
bucket[0] = empty
bucket[1] = 2 elements
bucket[2] = empty
bucket[3] = 1 element
bucket[4] = 1 element
bucket[5] = 3 elements
bucket[6] = empty
bucket[7] = 1 element
bucket[8] = empty
bucket[9] = empty
bucket[10] = empty
Calculated the easy way, the spread over the buckets is 5/8 = 0.625 (5 buckets used for 8 items).
But how do I calculate the spread taking the depth of the buckets into account?
I want to know this because:
Say I added 20 elements, and every bucket has 1 element while the last bucket has 11 elements.
Then the spread would be 1 if I calculate it the easy way, but this is obviously not correct! (The table resizes to avoid this, of course, but I want to be able to show the spread.) I want to use this information to tune hash functions.
Thanks in advance!
If you're only using this to tune the hash functions themselves, you could compute a genuine measure of statistical dispersion, such as the Gini coefficient. On the other hand, if you're trying to make this a feature of the hash-map itself, I would recommend against it - computing a complicated benchmark as part of the 'is resize necessary' logic has its own performance costs; something naïve is probably better.
When I've worked on improving hash functions, I've used the sum of the squares of the chain lengths divided by the number of items inserted (and attempted to minimize the result). In your first example, you've inserted 8 items and the sum of the squares of the lengths is 16, so your "figure of merit" is 2.
In the second, you've inserted 20 items, and the sum of the squares is 130, so your figure of merit would be 6.5. I'd say the first was likely to be a better hash function in general (though I generally prefer to compare results from identical inputs).
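As a small illustration (mine, with an invented function name), the figure of merit described above can be computed directly from the chain lengths:

#include <cstddef>
#include <vector>

// Sum of the squared chain lengths divided by the number of inserted items.
// Lower is better; a perfect one-item-per-bucket spread scores exactly 1.0.
double figureOfMerit(const std::vector<std::size_t>& chainLengths) {
    std::size_t items = 0;
    std::size_t sumOfSquares = 0;
    for (std::size_t len : chainLengths) {
        items += len;
        sumOfSquares += len * len;
    }
    return items == 0 ? 0.0 : static_cast<double>(sumOfSquares) / items;
}

// The first example above: lengths {2, 1, 1, 3, 1} give 16 / 8 = 2.0.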
You probably care about the answer because you want to know how much work you're doing with chaining. Thus, you probably should instrument your hash map to output how much work it's doing (a few #ifdefs that increment a counter in the key methods will probably do the trick). You then can use the amount of work (# compares, #nodes followed, etc.) as a metric for your hash function, and as a bonus you get a nifty tool for performance tuning. Once you figure things out, you can remove the instrumentation.