Converting sets of integers into ranges - c++

What's the most idiomatic way to convert a set of integers into a set of ranges?
E.g. given the set {0, 1, 2, 3, 4, 7, 8, 9, 11} I want to get { {0,4}, {7,9}, {11,11} }.
Let's say we are converting from std::set<int> into std::vector<std::pair<int, int>>.
I treat ranges as inclusive on both sides, since that's more convenient in my case, but I can work with open-ended ranges too if necessary.
I've written the following function, but I feel like I'm reinventing the wheel.
Please tell me if there's something in the STL or Boost for this.
#include <climits>
#include <set>
#include <utility>
#include <vector>
#include <boost/foreach.hpp>

typedef std::pair<int, int> Range;

void setToRanges(const std::set<int>& indices, std::vector<Range>& ranges)
{
    // sentinel start value; the r.second >= 0 test below assumes
    // the indices themselves are non-negative
    Range r = std::make_pair(-INT_MAX, -INT_MAX);
    BOOST_FOREACH(int i, indices)
    {
        if (i != r.second + 1)       // the current run ends here
        {
            if (r.second >= 0) ranges.push_back(r);
            r.first = i;             // start a new run
        }
        r.second = i;
    }
    ranges.push_back(r);             // note: pushes the sentinel if indices is empty
}

Now one can use interval_set from Boost.ICL (available since Boost 1.46):
#include <set>
#include <iostream>
#include <algorithm>
#include <boost/icl/discrete_interval.hpp>
#include <boost/icl/closed_interval.hpp>
#include <boost/icl/interval_set.hpp>

typedef std::set<int> Set;
typedef boost::icl::interval_set<int> IntervalSet;

void setToInterval(const Set& indices, IntervalSet& intervals)
{
    Set::const_iterator pos;
    for (pos = indices.begin(); pos != indices.end(); ++pos)
    {
        intervals.insert(boost::icl::construct<boost::icl::discrete_interval<int> >(*pos, *pos, boost::icl::interval_bounds::closed()));
    }
}

int main()
{
    std::cout << ">>Interval Container Library Rocks! <<\n";
    std::cout << "----------------------------------------------------\n";
    Set indices = {0, 1, 2, 3, 4, 7, 8, 9, 11};
    IntervalSet intervals;
    setToInterval(indices, intervals);
    std::cout << " intervals joined: " << intervals << "\n";
    return 0;
}
Output:
intervals joined: {[0,4][7,9][11,11]}
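If you still need the std::vector<std::pair<int, int>> representation from the question, the joined intervals can be walked directly. A small sketch (assuming Boost.ICL's first()/last() accessors, which are defined for discrete intervals):

std::vector<std::pair<int, int> > ranges;
for (IntervalSet::const_iterator it = intervals.begin(); it != intervals.end(); ++it)
{
    // each element is one already-joined, closed interval
    ranges.push_back(std::make_pair(boost::icl::first(*it), boost::icl::last(*it)));
}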

I don't think there's anything in the STL or Boost that does this.
One thing you can do is to make your algorithm a little bit more general:
template<class InputIterator, class OutputIterator>
void setToRanges(InputIterator first, InputIterator last, OutputIterator dest)
{
    typedef typename std::iterator_traits<InputIterator>::value_type item_type;
    typedef std::pair<item_type, item_type> pair_type;
    if (first == last) return;   // nothing to emit for an empty range
    pair_type r(-std::numeric_limits<item_type>::max(),
                -std::numeric_limits<item_type>::max());
    for (; first != last; ++first)
    {
        item_type i = *first;
        if (i != r.second + 1)
        {
            if (r.second >= 0)   // assumes non-negative items, as in the question
                *dest++ = r;
            r.first = i;
        }
        r.second = i;
    }
    *dest++ = r;
}
Usage:
std::set<int> set;
// insert items
typedef std::pair<int, int> Range;
std::vector<Range> ranges;
setToRanges(set.begin(), set.end(), std::back_inserter(ranges));
You should also consider using the term interval instead of range, because the latter in STL parlance means "any sequence of objects that can be accessed through iterators or pointers" (source).
Finally, you should probably take a look at the Boost Interval Arithmetic Library, which is currently under review for Boost inclusion.

No shrinkwrapped solution I'm afraid, but an alternative algorithm.
Store your items in a bitvector - O(n) if you know the maximum item at the start and preallocate the vector.
Translate that vector into a vector of transition point flags - exclusive-or the bitvector with a bitshifted version of itself. Slightly fiddly at the word boundaries, but still O(n). Logically, you get a new key at the old max + 1 (the transition back to zeros after all your keys are exhausted), so it's a good idea to allow for that in the preallocation of the vector.
Then, iterate through the bitvector finding the set bits. The first set bit indicates the start of a range, the second the end, the third the start of the next range and so on. The following bit-fiddling function (assuming 32 bit int) may be useful...
int Low_Bit_No (unsigned int p)
{
    if (p == 0) return -1;  // No bits set
    int l_Result = 31;
    unsigned int l_Range = 0xffffffff;
    unsigned int l_Mask  = 0x0000ffff;
    if (p & l_Mask) { l_Result -= 16; } else { l_Mask ^= l_Range; }
    l_Range &= l_Mask;
    l_Mask &= 0x00ff00ff;
    if (p & l_Mask) { l_Result -= 8; } else { l_Mask ^= l_Range; }
    l_Range &= l_Mask;
    l_Mask &= 0x0f0f0f0f;
    if (p & l_Mask) { l_Result -= 4; } else { l_Mask ^= l_Range; }
    l_Range &= l_Mask;
    l_Mask &= 0x33333333;
    if (p & l_Mask) { l_Result -= 2; } else { l_Mask ^= l_Range; }
    l_Mask &= 0x55555555;
    if (p & l_Mask) { l_Result -= 1; }
    return l_Result;
}
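On newer toolchains the whole function can be replaced by a builtin; a sketch assuming C++20 (GCC/Clang also offer __builtin_ctz for the same job):

#include <bit>

int Low_Bit_No(unsigned int p)
{
    return p == 0 ? -1 : std::countr_zero(p); // index of the lowest set bit
}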

I'd use adjacent_find with a predicate that defines "adjacency" as two elements that are not sequential. This solution doesn't depend on INT_MAX. Still feels kinda clunky.
bool notSequential(int a, int b) { return (a + 1) != b; }

void setToRanges(const std::set<int>& indices, std::vector<Range>& ranges)
{
    // note: assumes a non-empty set; the version below handles empty input
    std::set<int>::iterator iter = indices.begin();
    std::set<int>::iterator end = indices.end();
    int first;
    while (iter != end)
    {
        first = *iter;
        iter = std::adjacent_find(iter, end, notSequential);
        if (iter != end)
        {
            ranges.push_back(std::make_pair(first, *iter));
            ++iter;
        }
    }
    ranges.push_back(std::make_pair(first, *--iter));
}
That tests against end more than necessary. adjacent_find can never return the last element of a list, so the incremented iterator will never be end and thus can still be dereferenced. It could be rewritten as:
void setToRanges(const std::set<int>& indices, std::vector<Range>& ranges)
{
    std::set<int>::iterator iter = indices.begin();
    std::set<int>::iterator end = indices.end();
    if (iter == end) return; // empty set has no ranges
    int first;
    while (true)
    {
        first = *iter;
        iter = std::adjacent_find(iter, end, notSequential);
        if (iter == end) break;
        ranges.push_back(std::make_pair(first, *iter++));
    }
    ranges.push_back(std::make_pair(first, *--iter));
}
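For completeness, a quick usage sketch (assuming the Range typedef from the question and the functions above):

#include <algorithm>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    std::set<int> indices = {0, 1, 2, 3, 4, 7, 8, 9, 11};
    std::vector<Range> ranges;
    setToRanges(indices, ranges);
    for (std::size_t i = 0; i < ranges.size(); ++i)
        std::cout << "[" << ranges[i].first << "," << ranges[i].second << "] ";
    // prints: [0,4] [7,9] [11,11]
}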

How can I approach this CP task?

The task (from a Bulgarian judge; click on "Език" ["Language"] to switch it to English):
I am given the size of the first (S_1 = A) of N corals. The size of every subsequent coral (S_i, where i > 1) is calculated with the formula S_i = (B*S_{i-1} + C) % D, where A, B, C and D are some constants. I am told that Nemo is near the K-th coral (when the sizes of all corals are sorted in ascending order).
What is the size of the above-mentioned K-th coral?
I will have T tests, and for every one of them I will be given N, K, A, B, C and D and prompted to output the size of the K-th coral.
The requirements:
1 ≤ T ≤ 3
1 ≤ K ≤ N ≤ 10^7
0 ≤ A < D ≤ 10^18
1 ≤ C, B*D ≤ 10^18
Memory available is 64 MB
Time limit is 1.9 sec
The problem I have:
For the worst-case scenario I will need 10^7 * 8 B, which is ~76 MB.
If the memory available were at least 80 MB, the solution would be:
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>

using biggie = long long;

int main() {
    int t;
    std::cin >> t;
    int i, n, k, j;
    biggie a, b, c, d;
    std::vector<biggie>::iterator it_ans;
    for (i = 0; i != t; ++i) {
        std::cin >> n >> k >> a >> b >> c >> d;
        std::vector<biggie> lut{ a };
        lut.reserve(n);
        for (j = 1; j != n; ++j) {
            lut.emplace_back((b * lut.back() + c) % d);
        }
        it_ans = std::next(lut.begin(), k - 1);
        std::nth_element(lut.begin(), it_ans, lut.end());
        std::cout << *it_ans << '\n';
    }
    return 0;
}
Question 1: How can I approach this CP task given the requirements listed above?
Question 2: Is it somehow possible to use std::nth_element to solve it, since I am not able to store all N elements? I mean using std::nth_element in a sliding-window fashion (if this is possible).
Christian Sloper's answer:
#include <iostream>
#include <queue>

using biggie = long long;

int main() {
    int t;
    std::cin >> t;
    int i, n, k, j, j_lim;
    biggie a, b, c, d, prev, curr;
    for (i = 0; i != t; ++i) {
        std::cin >> n >> k >> a >> b >> c >> d;
        if (k < n - k + 1) {
            // keep the k smallest values seen so far in a max-heap
            std::priority_queue<biggie, std::vector<biggie>, std::less<biggie>> q;
            q.push(a);
            prev = a;
            for (j = 1; j != k; ++j) {
                curr = (b * prev + c) % d;
                q.push(curr);
                prev = curr;
            }
            for (; j != n; ++j) {
                curr = (b * prev + c) % d;
                if (curr < q.top()) {
                    q.pop();
                    q.push(curr);
                }
                prev = curr;
            }
            std::cout << q.top() << '\n';
        }
        else {
            // symmetric case: keep the n-k+1 largest values in a min-heap
            std::priority_queue<biggie, std::vector<biggie>, std::greater<biggie>> q;
            q.push(a);
            prev = a;
            for (j = 1, j_lim = n - k + 1; j != j_lim; ++j) {
                curr = (b * prev + c) % d;
                q.push(curr);
                prev = curr;
            }
            for (; j != n; ++j) {
                curr = (b * prev + c) % d;
                if (curr > q.top()) {
                    q.pop();
                    q.push(curr);
                }
                prev = curr;
            }
            std::cout << q.top() << '\n';
        }
    }
    return 0;
}
This gets accepted (Succeeds all 40 tests. Largest time 1.4 seconds, for a test with T=3 and D≤10^9. Largest time for a test with larger D (and thus T=1) is 0.7 seconds.).
#include <iostream>

using biggie = long long;

int main() {
    int t;
    std::cin >> t;
    int i, n, k, j;
    biggie a, b, c, d;
    for (i = 0; i != t; ++i) {
        std::cin >> n >> k >> a >> b >> c >> d;
        biggie prefix = 0;
        for (int shift = d > 1000000000 ? 40 : 20; shift >= 0; shift -= 20) {
            biggie prefix_mask = ((biggie(1) << (40 - shift)) - 1) << (shift + 20);
            int count[1 << 20] = {0};
            biggie s = a;
            int rank = 0;
            for (j = 0; j != n; ++j) {
                biggie s_vs_prefix = s & prefix_mask;
                if (s_vs_prefix < prefix)
                    ++rank;
                else if (s_vs_prefix == prefix)
                    ++count[(s >> shift) & ((1 << 20) - 1)];
                s = (b * s + c) % d;
            }
            int i = -1;
            while (rank < k)
                rank += count[++i];
            prefix |= biggie(i) << shift;
        }
        std::cout << prefix << '\n';
    }
    return 0;
}
The result is a 60-bit number. I first determine the high 20 bits with one pass through the numbers, then the middle 20 bits in another pass, then the low 20 bits in another.
For the high 20 bits, generate all the numbers and count how often each high-20-bits pattern occurs. After that, add up the counts until you reach K. The pattern where you reach K covers the K-th number; in other words, that's the result's high 20 bits.
The middle and low 20 bits are computed similarly, except we take the by-then-known prefix (the high 20 bits, or the high+middle 40 bits) into account. As a little optimization, when D is small, I skip computing the high 20 bits. That got me from 2.1 seconds down to 1.4 seconds.
This solution is like user3386109 described, except with bucket size 2^20 instead of 10^6, so I can use bit operations instead of divisions and think of bit patterns instead of ranges.
For the memory constraint you hit: computing
S_i = (B*S_{i-1} + C) % D
requires only the value immediately before it (S_{i-1}). So you can compute the values in pairs and store only half of what you would otherwise need. This only needs indexing the even-index values and iterating once for each odd-index value, so you can use a half-length LUT and compute the odd values in flight. Modern CPUs are fast enough to absorb extra calculations like these.
std::vector<biggie> lut{ s_0, s_2, s_4, ... }; // every second element
// an odd element is recomputed from its even neighbor, e.g.:
// s_3 = computeOddFromEven(lut[1]);
You can use a longer stride like 4 or 8, too; a sketch follows below. If the dataset is large, RAM latency is significant, so this is like placing checkpoints across the whole search space to balance memory against CPU usage. Checkpoints every 1000 elements would spend a lot of CPU cycles on re-calculation, but then the array would fit in the CPU's L2/L1 cache, which is not bad. When sorting, the maximum re-calculation per element access would then be 1000 iterations, i.e. O(1000 x size); that's a big constant, but perhaps one the compiler can optimize away in part if the constants are truly compile-time constants.
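A minimal sketch of the checkpoint idea (stride of 2 for clarity; step() and get() are hypothetical helper names, not from the original code):

#include <vector>

using biggie = long long;

// one application of the recurrence
biggie step(biggie s, biggie b, biggie c, biggie d) { return (b * s + c) % d; }

// lut[j] holds S_{2j}; any S_i is recovered with at most one extra step
biggie get(const std::vector<biggie>& lut, int i, biggie b, biggie c, biggie d)
{
    biggie s = lut[i / 2];
    return (i % 2 == 0) ? s : step(s, b, c, d);
}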
If CPU performance becomes a problem again:
write a function that emits your source code, with all the "constants" given by the user baked in, to a string
compile that code from the command line (assuming the target computer has a compiler, such as g++, that the main program can invoke)
run it and collect the results
The compiler can apply more speed/memory optimizations when those values are genuine compile-time constants rather than read from std::cin.
If you really need a hard limit on RAM usage, then implement a simple cache whose backing store is your heavy computation, brute-force O(N^2) (or O(L x N) with checkpoints every L elements as in the first method, where L = 2 or 4, or ...).
Here's a sample direct-mapped cache with 8M long-long value space:
int main()
{
    std::vector<long long> checkpoints = {
        a_0, a_16, a_32, ...
    };
    auto cacheReadMissFunction = [&](int key){
        // your pure computational algorithm here (iterate() is a placeholder):
        // start from the nearest checkpoint and iterate forward to the key
        long long result = checkpoints[key / 16];
        for (int step = 0; step < key % 16; ++step)
            result = iterate(result);
        return result;
    };
    auto cacheWriteMissFunction = [&](int key, long long value){
        /* not useful for your algorithm as it doesn't change behavior per element */
        // backing_store[key] = value;
    };
    // due to special optimizations, size has to be 2^k
    int cacheSize = 1024*1024*8;
    DirectMappedCache<int, long long> cache(cacheSize, cacheReadMissFunction, cacheWriteMissFunction);
    std::cout << cache.get(20) << std::endl;
    return 0;
}
If you use a cache-friendly sorting algorithm, direct cache access will get plenty of re-use across nearly all element comparisons, if you emit the output one element at a time following something like a bitonic sort path (which is known at compile time). If that doesn't work, you can try using files as the cache's "backing store" and sort the whole array at once. Is the file system prohibited? Then the online-compiling method above won't work either.
Implementation of a direct mapped cache (don't forget to call flush() after your algorithm finishes, if you use any cache.set() method):
#ifndef DIRECTMAPPEDCACHE_H_
#define DIRECTMAPPEDCACHE_H_

#include<vector>
#include<functional>
#include<mutex>
#include<iostream>

/* Direct-mapped cache implementation
 * Only usable for integer type keys in range [0,maxPositive-1]
 *
 * CacheKey: type of key (only integers: int, char, size_t)
 * CacheValue: type of value that is bound to key (same as above)
 */
template< typename CacheKey, typename CacheValue>
class DirectMappedCache
{
public:
    // allocates buffers for numElements number of cache slots/lanes
    // readMiss:  cache-miss for read operations. User needs to give this function
    //            to let the cache automatically get data from backing-store
    //            example: [&](MyClass key){ return redis.get(key); }
    //            takes a CacheKey as key, returns CacheValue as value
    // writeMiss: cache-miss for write operations. User needs to give this function
    //            to let the cache automatically set data to backing-store
    //            example: [&](MyClass key, MyAnotherClass value){ redis.set(key,value); }
    //            takes a CacheKey as key and CacheValue as value
    // numElements: has to be integer-power of 2 (e.g. 2,4,8,16,...)
    DirectMappedCache(CacheKey numElements,
                      const std::function<CacheValue(CacheKey)> & readMiss,
                      const std::function<void(CacheKey,CacheValue)> & writeMiss)
        : size(numElements), sizeM1(numElements-1), loadData(readMiss), saveData(writeMiss)
    {
        // initialize buffers
        for(size_t i=0;i<numElements;i++)
        {
            valueBuffer.push_back(CacheValue());
            isEditedBuffer.push_back(0);
            keyBuffer.push_back(CacheKey()-1);// mapping of 0+ allowed
        }
    }

    // get element from cache
    // if cache doesn't find it in buffers,
    // then cache gets data from backing-store
    // then returns the result to user
    // then cache is available from RAM on next get/set access with same key
    inline
    const CacheValue get(const CacheKey & key) noexcept
    {
        return accessDirect(key,nullptr);
    }

    // only syntactic difference
    inline
    const std::vector<CacheValue> getMultiple(const std::vector<CacheKey> & key) noexcept
    {
        const int n = key.size();
        std::vector<CacheValue> result(n);
        for(int i=0;i<n;i++)
        {
            result[i]=accessDirect(key[i],nullptr);
        }
        return result;
    }

    // thread-safe but slower version of get()
    inline
    const CacheValue getThreadSafe(const CacheKey & key) noexcept
    {
        std::lock_guard<std::mutex> lg(mut);
        return accessDirect(key,nullptr);
    }

    // set element to cache
    // if cache doesn't find it in buffers,
    // then cache sets data on just cache
    // writing to backing-store only happens when
    // another access evicts the cache slot containing this key/value
    // or when cache is flushed by flush() method
    // then returns the given value back
    // then cache is available from RAM on next get/set access with same key
    inline
    void set(const CacheKey & key, const CacheValue & val) noexcept
    {
        accessDirect(key,&val,1);
    }

    // thread-safe but slower version of set()
    inline
    void setThreadSafe(const CacheKey & key, const CacheValue & val) noexcept
    {
        std::lock_guard<std::mutex> lg(mut);
        accessDirect(key,&val,1);
    }

    // use this before closing the backing-store to store the latest bits of data
    void flush()
    {
        try
        {
            std::lock_guard<std::mutex> lg(mut);
            for (size_t i=0;i<size;i++)
            {
                if (isEditedBuffer[i] == 1)
                {
                    isEditedBuffer[i]=0;
                    auto oldKey = keyBuffer[i];
                    auto oldValue = valueBuffer[i];
                    saveData(oldKey,oldValue);
                }
            }
        }catch(std::exception &ex){ std::cout<<ex.what()<<std::endl; }
    }

    // direct mapped access
    // opType=0: get
    // opType=1: set
    CacheValue const accessDirect(const CacheKey & key,const CacheValue * value, const bool opType = 0)
    {
        // find tag mapped to the key
        CacheKey tag = key & sizeM1;

        // compare keys
        if(keyBuffer[tag] == key)
        {
            // cache-hit

            // "set"
            if(opType == 1)
            {
                isEditedBuffer[tag]=1;
                valueBuffer[tag]=*value;
            }

            // cache hit value
            return valueBuffer[tag];
        }
        else // cache-miss
        {
            CacheValue oldValue = valueBuffer[tag];
            CacheKey oldKey = keyBuffer[tag];

            // eviction algorithm start
            if(isEditedBuffer[tag] == 1)
            {
                // if it is "get"
                if(opType==0)
                {
                    isEditedBuffer[tag]=0;
                }

                saveData(oldKey,oldValue);

                // "get"
                if(opType==0)
                {
                    const CacheValue && loadedData = loadData(key);
                    valueBuffer[tag]=loadedData;
                    keyBuffer[tag]=key;
                    return loadedData;
                }
                else /* "set" */
                {
                    valueBuffer[tag]=*value;
                    keyBuffer[tag]=key;
                    return *value;
                }
            }
            else // not edited
            {
                // "set"
                if(opType == 1)
                {
                    isEditedBuffer[tag]=1;
                }

                // "get"
                if(opType == 0)
                {
                    const CacheValue && loadedData = loadData(key);
                    valueBuffer[tag]=loadedData;
                    keyBuffer[tag]=key;
                    return loadedData;
                }
                else // "set"
                {
                    valueBuffer[tag]=*value;
                    keyBuffer[tag]=key;
                    return *value;
                }
            }
        }
    }

private:
    const CacheKey size;
    const CacheKey sizeM1;
    std::mutex mut;
    std::vector<CacheValue> valueBuffer;
    std::vector<unsigned char> isEditedBuffer;
    std::vector<CacheKey> keyBuffer;
    const std::function<CacheValue(CacheKey)> loadData;
    const std::function<void(CacheKey,CacheValue)> saveData;
};

#endif /* DIRECTMAPPEDCACHE_H_ */
You can solve this problem using a Max-heap.
Insert the first k elements into the max-heap. The largest element of these k will now be at the root.
For each remaining element e:
Compare e to the root.
If e is larger than the root, discard it.
If e is smaller than the root, remove the root and insert e into the heap structure.
After all elements have been processed, the k-th smallest element is at the root.
This method uses O(K) space and O(N log K) time.
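A minimal sketch of this approach (essentially what the first branch of Christian Sloper's code above does; gen is a hypothetical callable producing the next element of the sequence):

#include <queue>
#include <vector>

// returns the k-th smallest of the n values produced by gen(); requires 1 <= k <= n
template <class Gen>
long long kthSmallest(int n, int k, Gen gen)
{
    std::priority_queue<long long> q; // max-heap
    for (int i = 0; i < n; ++i) {
        long long e = gen();
        if (i < k)
            q.push(e);            // fill the heap with the first k elements
        else if (e < q.top()) {   // e beats the current k-th smallest
            q.pop();
            q.push(e);
        }
    }
    return q.top();
}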
There’s an algorithm that people often call LazySelect that I think would be perfect here.
With high probability, we make two passes. In the first pass, we save a random sample of size n much less than N. The answer will be around index (K/N)n in the sorted sample, but due to the randomness, we have to be careful. Save the values a and b at (K/N)n ± r instead, where r is the radius of the window. In the second pass, we save all of the values in [a, b], count the number of values less than a (let it be L), and select the value with index K−L if it’s in the window (otherwise, try again).
The theoretical advice on choosing n and r is fine, but I would be pragmatic here. Choose n so that you use most of the available memory; the bigger the sample, the more informative it is. Choose r fairly large as well, but not quite as aggressively due to the randomness.
C++ code below. On the online judge, it’s faster than Kelly’s (max 1.3 seconds on the T=3 tests, 0.5 on the T=1 tests).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <limits>
#include <optional>
#include <random>
#include <vector>

namespace {
class LazySelector {
 public:
  static constexpr std::int32_t kTargetSampleSize = 1000;

  explicit LazySelector() { sample_.reserve(1000000); }

  void BeginFirstPass(const std::int32_t n, const std::int32_t k) {
    sample_.clear();
    mask_ = n / kTargetSampleSize;
    mask_ |= mask_ >> 1;
    mask_ |= mask_ >> 2;
    mask_ |= mask_ >> 4;
    mask_ |= mask_ >> 8;
    mask_ |= mask_ >> 16;
  }

  void FirstPass(const std::int64_t value) {
    if ((gen_() & mask_) == 0) {
      sample_.push_back(value);
    }
  }

  void BeginSecondPass(const std::int32_t n, const std::int32_t k) {
    sample_.push_back(std::numeric_limits<std::int64_t>::min());
    sample_.push_back(std::numeric_limits<std::int64_t>::max());
    const double p = static_cast<double>(sample_.size()) / n;
    const double radius = 2 * std::sqrt(sample_.size());
    const auto lower =
        sample_.begin() + std::clamp<std::int32_t>(std::floor(p * k - radius),
                                                   0, sample_.size() - 1);
    const auto upper =
        sample_.begin() + std::clamp<std::int32_t>(std::ceil(p * k + radius), 0,
                                                   sample_.size() - 1);
    std::nth_element(sample_.begin(), upper, sample_.end());
    std::nth_element(sample_.begin(), lower, upper);
    lower_ = *lower;
    upper_ = *upper;
    sample_.clear();
    less_than_lower_ = 0;
    equal_to_lower_ = 0;
    equal_to_upper_ = 0;
  }

  void SecondPass(const std::int64_t value) {
    if (value < lower_) {
      ++less_than_lower_;
    } else if (upper_ < value) {
    } else if (value == lower_) {
      ++equal_to_lower_;
    } else if (value == upper_) {
      ++equal_to_upper_;
    } else {
      sample_.push_back(value);
    }
  }

  std::optional<std::int64_t> Select(std::int32_t k) {
    if (k < less_than_lower_) {
      return std::nullopt;
    }
    k -= less_than_lower_;
    if (k < equal_to_lower_) {
      return lower_;
    }
    k -= equal_to_lower_;
    if (k < sample_.size()) {
      const auto kth = sample_.begin() + k;
      std::nth_element(sample_.begin(), kth, sample_.end());
      return *kth;
    }
    k -= sample_.size();
    if (k < equal_to_upper_) {
      return upper_;
    }
    return std::nullopt;
  }

 private:
  std::default_random_engine gen_;
  std::vector<std::int64_t> sample_ = {};
  std::int32_t mask_ = 0;
  std::int64_t lower_ = std::numeric_limits<std::int64_t>::min();
  std::int64_t upper_ = std::numeric_limits<std::int64_t>::max();
  std::int32_t less_than_lower_ = 0;
  std::int32_t equal_to_lower_ = 0;
  std::int32_t equal_to_upper_ = 0;
};
}  // namespace

int main() {
  int t;
  std::cin >> t;
  for (int i = t; i > 0; --i) {
    std::int32_t n;
    std::int32_t k;
    std::int64_t a;
    std::int64_t b;
    std::int64_t c;
    std::int64_t d;
    std::cin >> n >> k >> a >> b >> c >> d;
    std::optional<std::int64_t> ans = std::nullopt;
    LazySelector selector;
    do {
      {
        selector.BeginFirstPass(n, k);
        std::int64_t s = a;
        for (std::int32_t j = n; j > 0; --j) {
          selector.FirstPass(s);
          s = (b * s + c) % d;
        }
      }
      {
        selector.BeginSecondPass(n, k);
        std::int64_t s = a;
        for (std::int32_t j = n; j > 0; --j) {
          selector.SecondPass(s);
          s = (b * s + c) % d;
        }
      }
      ans = selector.Select(k - 1);
    } while (!ans);
    std::cout << *ans << '\n';
  }
}

Binary Search Vector for Closest Value C++

Like the title says, I am trying to use a binary search method to search a sorted vector for the value closest to a given one and return its index. I have attempted to use lower/upper_bound(), but the returned value is either the first or last vector value, or "0". Below is the txt file from which I have read the temperature and voltage into vectors.
1.4 1.644290 -12.5
1.5 1.642990 -13.6
1.6 1.641570 -14.8
1.7 1.640030 -16.0
1.8 1.638370 -17.1
This is my current linear search that works
double Convert::convertmVtoK(double value) const
{
    assert(!mV.empty());
    auto it = std::min_element(mV.begin(), mV.end(), [value] (double a, double b) {
        return std::abs(value - a) < std::abs(value - b);
    });
    assert(it != mV.end());
    int index = std::distance(mV.begin(), it);
    std::cout << kelvin[index];
    return kelvin[index];
}
This is the algorithm I am currently trying to get working to improve performance.
double Convert::convertmVtoK(double value)
{
    auto it = lower_bound(mV.begin(), mV.end(), value);
    if (it == mV.begin())
    {
        it = mV.begin();
    }
    else
    {
        --it;
    }
    auto jt = upper_bound(mV.begin(), mV.end(), value), out = it;
    if (it == mV.end() || jt != mV.end() && value - *it > *jt - value)
    {
        out = jt;
    }
    cout << "This is conversion mV to K" << " " << *out;
}
Any suggestions would be much appreciated. I believe the issue may lie with the vector being sorted in descending order, but I need the order to remain the same in order to compare the values.
SOLVED thanks to @John. For anyone who needs this in the future, here is what works:
double Convert::convertmVtoK(double value) const
{
    auto it = lower_bound(mV.begin(), mV.end(), value, [](double a, double b){ return a > b; });
    int index = std::distance(mV.begin(), it);
    std::cout << kelvin[index];
    return kelvin[index];
}
Since you have a non-increasing range (sorted in descending order), you can use std::lower_bound with a greater than operator, as mentioned in comments. However, this only gets you the first result past or equal to your number. It doesn't mean it's the "closest", which is what you asked for.
Instead, I would use std::upper_bound, so you don't have to check for equality (on double just to make it worse) and then drop back one to get the other bounding data point, and compute which one is actually closer. Along with some boundary checks:
#include <vector>
#include <algorithm>
#include <iostream>
#include <functional>
#include <iterator>
#include <stdexcept>
#include <cmath>

// for nonincreasing range of double, find closest to value, return its index
int index_closest(std::vector<double>::iterator begin, std::vector<double>::iterator end, double value) {
    if (begin == end){
        // we're boned
        throw std::runtime_error("index_closest has no valid index to return");
    }
    auto it = std::upper_bound(begin, end, value, std::greater<double>());
    // first member is closest
    if (begin == it)
        return 0;
    // last member is closest. end is one past that.
    if (end == it)
        return std::distance(begin, end) - 1;
    // between two, need to see which is closer
    double diff1 = std::abs(value - *it);
    double diff2 = std::abs(value - *(it-1));
    if (diff2 < diff1)
        --it;
    return std::distance(begin, it);
}
int main()
{
    std::vector<double> data{ -12.5, -13.6, -14.8, -16.0, -17.1 };
    for (double value = -12.0; value > -18.99; value = value - 1.0) {
        int index = index_closest(data.begin(), data.end(), value);
        std::cout << value << " is closest to " << data[index] << " at index " << index << std::endl;
    }
}
output
-12 is closest to -12.5 at index 0
-13 is closest to -12.5 at index 0
-14 is closest to -13.6 at index 1
-15 is closest to -14.8 at index 2
-16 is closest to -16 at index 3
-17 is closest to -17.1 at index 4
-18 is closest to -17.1 at index 4
Note that, e.g., -14 is closer to -13.6 than to -14.8, as a specific counterexample for your current approach. Also note the importance of inputs at both end points.
From there you are welcome to take kelvin[i]. I wasn't happy with using an external data array for the function's return value when you don't need to do that, so I just returned the index.
You might use the following to get the iterator to the closest value:
auto FindClosest(const std::vector<double>& v, double value)
{
    // assert(std::is_sorted(v.begin(), v.end(), std::greater<>{}));
    auto it = std::lower_bound(v.begin(), v.end(), value, std::greater<>{});
    if (it == v.begin()) {
        return it;
    } else if (it == v.end()) {
        return it - 1;
    } else {
        return std::abs(value - *it) < std::abs(value - *(it - 1)) ?
            it : it - 1;
    }
}
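A quick usage sketch, mapping the returned iterator back to the question's kelvin table (names taken from the question):

auto it = FindClosest(mV, value);
int index = std::distance(mV.begin(), it);
std::cout << kelvin[index];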
This method works, but I am not 100% sure it always gives the closest value. It incorporates part of @KennyOstrom's method.
double Convert::convertmVtoK(double value) const
{
    auto it = lower_bound(mV.begin(), mV.end(), value, [](double a, double b){ return a > b; });
    int index = std::distance(mV.begin(), it);
    if (value > mV[0] || value < mV.back())
    {
        std::cout << "Warning: Voltage Out of Range" << "\n";
        // fall back to the nearest end of the table
        return value > mV[0] ? kelvin.front() : kelvin.back();
    }
    else if (value == mV[0] || value == mV.back()
             || fabs(value - mV[index]) <= 0.0001 * fabs(value))
    {
        std::cout << kelvin[index];
        return kelvin[index];
    }
    else
    {
        double diff1 = std::abs(value - mV[index]);
        double diff2 = std::abs(value - mV[index-1]);
        if (diff2 < diff1)
        {
            std::cout << kelvin[index-1];
            return kelvin[index-1];
        }
        else
        {
            std::cout << kelvin[index];
            return kelvin[index];
        }
    }
}

Averaging and decreasing the array (vector) C++

I've got an array (actually a std::vector) of ~7k elements.
If you plot this data, you get a diagram of the combustion of the fuel. I want to shrink this vector from 7k elements to 721 (one every 0.5 degrees) or ~1200 (one every 0.3 degrees), and of course I want the diagram to stay the same. How can I do it?
Right now I am taking every 9th element from the big vector into a new one, and trimming the rest evenly from the front and back of the vector to get down to 721 elements.
QVector<double> newVMTVector;
for (QVector<double>::iterator itv = oldVmtDataVector.begin(); itv < oldVmtDataVector.end() - 9; itv += 9) {
    newVMTVector.push_back(*itv);
}
auto useless = newVMTVector.size() - 721;
if (useless % 2 == 0) {
    newVMTVector.erase(newVMTVector.begin(), newVMTVector.begin() + useless/2);
    newVMTVector.erase(newVMTVector.end() - useless/2, newVMTVector.end());
}
else {
    newVMTVector.erase(newVMTVector.begin(), newVMTVector.begin() + useless/2 + 1);
    newVMTVector.erase(newVMTVector.end() - useless/2, newVMTVector.end());
}
newVMTVector.squeeze();
oldVmtDataVector.clear();
oldVmtDataVector = newVMTVector;
I can swear there is an algorithm that averages and reduces the array.
The way I understand it, you want to pick the elements [0, k, 2k, 3k, ...] where k is 10 or k is 6.
Here's a simple take:
template <typename It>
It strided_inplace_reduce(It it, It const last, size_t stride) {
    It out = it;
    if (stride < 1) return last;
    while (it < last)
    {
        *out++ = *it;
        std::advance(it, stride);
    }
    return out;
}
Generalizing a bit for non-random-access iterators:
#include <iterator>

namespace detail {
    // version for random access iterators
    template <typename It>
    It strided_inplace_reduce(It it, It const last, size_t stride, std::random_access_iterator_tag) {
        It out = it;
        if (stride < 1) return last;
        while (it < last)
        {
            *out++ = *it;
            std::advance(it, stride);
        }
        return out;
    }

    // other iterator categories
    template <typename It>
    It strided_inplace_reduce(It it, It const last, size_t stride, ...) {
        It out = it;
        if (stride < 1) return last;
        while (it != last) {
            *out++ = *it;
            for (size_t n = stride; n && it != last; --n)
            {
                it = std::next(it);
            }
        }
        return out;
    }
}

template <typename Range>
auto strided_inplace_reduce(Range& range, size_t stride) {
    using std::begin;
    using std::end;
    using It = decltype(begin(range));
    It it = begin(range), last = end(range);
    return detail::strided_inplace_reduce(it, last, stride, typename std::iterator_traits<It>::iterator_category{});
}
#include <vector>
#include <list>
#include <iostream>
#include <algorithm> // std::copy

int main() {
    {
        std::vector<int> v { 1,2,3,4,5,6,7,8,9 };
        v.erase(strided_inplace_reduce(v, 2), v.end());
        std::copy(v.begin(), v.end(), std::ostream_iterator<int>(std::cout << "\nv: ", " "));
    }
    {
        std::list<int> l { 1,2,3,4,5,6,7,8,9 };
        l.erase(strided_inplace_reduce(l, 4), l.end());
        std::copy(l.begin(), l.end(), std::ostream_iterator<int>(std::cout << "\nl: ", " "));
    }
}
Prints
v: 1 3 5 7 9
l: 1 5 9
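Since the question mentions averaging, a simple bin-average reduction may be closer to what's wanted: each output element is the mean of one stride-sized bin. A sketch (bin_average is an illustrative name, not a library function):

#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<double> bin_average(const std::vector<double>& in, std::size_t stride)
{
    std::vector<double> out;
    out.reserve((in.size() + stride - 1) / stride);
    for (std::size_t i = 0; i < in.size(); i += stride) {
        const std::size_t n = std::min(stride, in.size() - i); // last bin may be short
        double sum = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            sum += in[i + j];
        out.push_back(sum / n);
    }
    return out;
}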
What you need is interpolation. There are many libraries providing many types of interpolation. This one is very lightweight and easy to set up and run:
http://kluge.in-chemnitz.de/opensource/spline/
All you need to do is create a second vector that contains the X values, pass both vectors in to generate the spline, and then evaluate it every 0.5 degrees or whatever:
std::vector<double> Y; // Y is your current vector of fuel combustion values with ~7k elements
std::vector<double> X(Y.size()); // sized (not just reserved): we index into it below
double step_x = 360.0 / (double)Y.size();
for (size_t i = 0; i < X.size(); ++i)
    X[i] = i * step_x;
tk::spline s;
s.set_points(X, Y);
double interpolation_step = 0.5;
std::vector<double> interpolated_results;
interpolated_results.reserve((size_t)std::ceil(360 / interpolation_step) + 1);
for (double x = 0.0; x <= 360.0; x += interpolation_step) // <= in order to obtain range <0;360>
    interpolated_results.push_back(s(x));
if (std::fmod(360.0, interpolation_step) != 0.0) // for steps that don't divide 360 evenly, e.g. 0.7 deg, we need to close the range
    interpolated_results.push_back(s(360.0));
// now interpolated_results contain values every 0.5 degrees
This should give you an idea of how to use this kind of library. If you need some other interpolation type, just find the one that suits your needs; the usage should be similar.

Best way to average duplicate properties in C++ vector

I have a std::vector<PLY> that holds a number of structs:
struct PLY {
    int x;
    int y;
    int greyscale;
};
Some of the PLYs could be duplicates in terms of their position x and y, but not necessarily in terms of their greyscale value. What is the best way to find those (position) duplicates and replace them with a single PLY instance whose greyscale value is the average greyscale of all the duplicates?
E.g.: PLY a{1,1,188} is a duplicate of PLY b{1,1,255}. Same (x,y) position, possibly different greyscale.
Based on your description of Ply you need these operators:
auto operator==(const Ply& a, const Ply& b)
{
    return a.x == b.x && a.y == b.y;
}

auto operator<(const Ply& a, const Ply& b)
{
    // whenever you can be lazy!
    return std::make_pair(a.x, a.y) < std::make_pair(b.x, b.y);
}
Very important: if the definition "two Ply are identical if their x and y are identical" is not generally valid, then defining comparison operators that ignore greyscale is a bad idea. In that case you should define separate function objects or non-operator functions and pass them to the functions that need them.
There is a nice rule of thumb that a function should not have more than one loop. So instead of two nested for loops, we define this helper function, which computes the average of consecutive duplicates and also returns the end of the consecutive-duplicates range:
// prereq: [begin, end) has at least one element
// i.e. begin != end
template <class It>
auto compute_average_duplicates(It begin, It end) -> std::pair<int, It>
// (sadly not C++17) concepts:
//requires requires(It i) { {*i} -> Ply; }
{
    auto it = begin + 1;
    int sum = begin->greyscale;
    for (; it != end && *begin == *it; ++it) {
        sum += it->greyscale;
    }
    // you might need rounding instead of truncation:
    return std::make_pair(sum / std::distance(begin, it), it);
}
With this we can have our algorithm:
auto foo()
{
    std::vector<Ply> v = {{1, 5, 10}, {2, 4, 6}, {1, 5, 2}};
    std::sort(std::begin(v), std::end(v));
    for (auto i = std::begin(v); i != std::end(v); ++i) {
        decltype(i) j;
        int average;
        std::tie(average, j) = compute_average_duplicates(i, std::end(v));
        // C++17 (coming soon in a compiler near you):
        // auto [average, j] = compute_average_duplicates(i, std::end(v));
        if (i + 1 == j)
            continue;
        i->greyscale = average;
        v.erase(i + 1, j);
        // std::vector::erase invalidates iterators and references
        // at or after the point of the erase,
        // which means i remains valid, and `++i` (from the for) is correct
    }
}
You can apply lexicographical sorting first. When accumulating you should take care of greyscale overflow. With the current approach there will be some roundoff error, but it will be small, since I first sum and only then average.
In the second part you need to remove the duplicates from the array. I used an additional array of flags to copy every element no more than once. If you have some forbidden value for x, y or greyscale, you can use it instead and get along without the additional array.
#include <algorithm>
#include <vector>
using namespace std;

struct PLY {
    int x;
    int y;
    int greyscale;
};

int main()
{
    struct comp
    {
        bool operator()(const PLY &a, const PLY &b) { return a.x != b.x ? a.x < b.x : a.y < b.y; }
    };
    vector<PLY> v{ {1,1,1}, {1,2,2}, {1,1,2}, {1,3,5}, {1,2,7} };
    sort(begin(v), end(v), comp());
    vector<bool> ind(v.size(), true);
    int s = 0;
    for (int i = 1; i < (int)v.size(); ++i)
    {
        if (v[i].x == v[i - 1].x && v[i].y == v[i - 1].y)
        {
            v[s].greyscale += v[i].greyscale;
            ind[i] = false;
        }
        else
        {
            int d = i - s;
            if (d != 1)
            {
                v[s].greyscale /= d;
            }
            s = i;
        }
    }
    // don't forget to average the final group as well
    {
        int d = (int)v.size() - s;
        if (d != 1)
        {
            v[s].greyscale /= d;
        }
    }
    s = 0;
    for (int i = 0; i < (int)v.size(); ++i)
    {
        if (ind[i])
        {
            if (s != i)
            {
                v[s] = v[i];
            }
            ++s;
        }
    }
    v.resize(s);
}
So you need to check whether PLY a1{1,1,1} is a duplicate of PLY a2{2,2,1}.
The simple method is to override operator== to check a1.x == a2.x && a1.y == a2.y. Then you can write your own function removeDuplicates(std::vector<PLY>& mPLY) that uses the vector's iterators to compare and remove. But it is better to use std::list if you want to remove from the middle of the array frequently.
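A rough sketch of what such a removeDuplicates could look like, with the averaging folded in (assumes the vector has first been sorted by position, as in the answers above; removeDuplicates is the name suggested here, not a standard function):

#include <vector>

void removeDuplicates(std::vector<PLY>& v)
{
    auto out = v.begin();
    for (auto it = v.begin(); it != v.end(); )
    {
        long sum = 0;
        int n = 0;
        auto run = it;
        // consume one run of position-duplicates
        for (; run != v.end() && run->x == it->x && run->y == it->y; ++run)
        {
            sum += run->greyscale;
            ++n;
        }
        *out = *it;
        out->greyscale = static_cast<int>(sum / n);
        ++out;
        it = run;
    }
    v.erase(out, v.end());
}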

How to replace (or speed up) the parallel comparison when the number of pairs is extremely huge?

The following code, a pairwise comparison, takes forever to run when the number of keys in the map is huge (e.g. 100000) and each key's second element has many elements (e.g. 100000) as well.
Is there any possible way to speed up the comparison? My CPU is a Xeon E5450 3.00 GHz with 4 cores. RAM is fair enough.
// There is a map with long as its key and vector<long> as its second element;
// the vector's elements are sorted in increasing order.
map<long, vector<long> > aMap;
map<long, vector<long> >::iterator it1 = aMap.begin();
map<long, vector<long> >::iterator it2;

// the code needs to compare each key's second elements
for ( ; it1 != aMap.end(); it1++ ) {
    it2 = it1;
    it2++;
    // Pairwise comparison: THE MOST TIME-CONSUMING PART
    for ( ; it2 != aMap.end(); it2++ ) {
        unsigned long i = 0, j = 0, _union = 0, _inter = 0;
        // merge-style walk over the two sorted vectors, counting common elements
        while ( i < it1->second.size() && j < it2->second.size() ) {
            if ( it1->second[i] < it2->second[j] ) {
                i++;
            } else if ( it1->second[i] > it2->second[j] ) {
                j++;
            } else {
                i++; j++; _inter++;
            }
        }
        _union = it1->second.size() + it2->second.size() - _inter;
        if ( (double) _inter / _union > THRESH )
            cout << it1->first << " might appear frequently with " << it2->first << endl;
    }
}
(This is not a complete answer. It only solves part of your problem; the part about bit manipulation.)
Here's a class you might be able to use to calculate the number of intersections between two sets (the cardinality of the intersection) rather quickly.
It uses a bit vector to store the sets, which means the universe of the possible set members must be small.
#include <cassert>
#include <vector>

class BitVector
{
    // IMPORTANT: U must be unsigned
    // IMPORTANT: use unsigned long long in 64-bit builds.
    typedef unsigned long U;
    static const unsigned UBits = 8 * sizeof(U);

public:
    BitVector (unsigned size)
        : m_bits ((size + UBits - 1) / UBits, 0)
        , m_size (size)
    {
    }

    void set (unsigned bit)
    {
        assert (bit < m_size);
        m_bits[bit / UBits] |= (U)1 << (bit % UBits);
    }

    void clear (unsigned bit)
    {
        assert (bit < m_size);
        m_bits[bit / UBits] &= ~((U)1 << (bit % UBits));
    }

    unsigned countIntersect (BitVector const & that) const
    {
        assert (m_size == that.m_size);
        unsigned ret = 0;
        for (std::vector<U>::const_iterator i = m_bits.cbegin(),
             j = that.m_bits.cbegin(), e = m_bits.cend(), f = that.m_bits.cend();
             i != e && j != f; ++i, ++j)
        {
            U x = *i & *j;
            // Count the number of 1 bits in x and add it to ret
            // There are much better ways than this,
            // e.g. using the POPCNT instruction or intrinsic
            while (x != 0)
            {
                ret += x & 1;
                x >>= 1;
            }
        }
        return ret;
    }

    unsigned countUnion (BitVector const & that) const
    {
        assert (m_size == that.m_size);
        unsigned ret = 0;
        for (std::vector<U>::const_iterator i = m_bits.cbegin(),
             j = that.m_bits.cbegin(), e = m_bits.cend(), f = that.m_bits.cend();
             i != e && j != f; ++i, ++j)
        {
            U x = *i | *j;
            while (x != 0)
            {
                ret += x & 1;
                x >>= 1;
            }
        }
        return ret;
    }

private:
    std::vector<U> m_bits;
    unsigned m_size;
};
And here's a very small test program to see how you can use the above class. It makes two sets (each with 100K maximum elements), adds some items to them (using the set() member function) and then calculates their intersection 1 billion times. It runs in under two seconds on my machine.
#include <iostream>
using namespace std;

int main ()
{
    unsigned const SetSize = 100000;
    BitVector a (SetSize), b (SetSize);
    for (unsigned i = 0; i < SetSize; i += 2) a.set (i);
    for (unsigned i = 0; i < SetSize; i += 3) b.set (i);
    unsigned x = a.countIntersect (b);
    cout << x << endl;
    return 0;
}
Don't forget to compile this with optimizations enabled! Otherwise it performs very badly.
POPCNT
Modern processors have an instruction to count the number of set bits in a word, called POPCNT. This is quite a lot faster than doing the naive thing written above (as a side note, there are faster ways to do it in software as well, but I didn't want to pollute the code.)
Anyway, the way to use POPCNT in C/C++ code is via a compiler intrinsic or built-in. In MSVC, you can use the __popcnt() intrinsic, which works on 32-bit integers (and __popcnt64() for 64-bit ones). In GCC, you can use __builtin_popcount() for 32-bit integers and __builtin_popcountll() for 64 bits. Be warned that these functions may not be available depending on your compiler version, target architecture and/or compile switches.
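For example, the naive counting loop in countIntersect could be collapsed to a builtin call; a sketch assuming GCC/Clang and that U is switched to unsigned long long:

// drop-in replacement for the bit-counting while loop above
static inline unsigned popcount_word(unsigned long long x)
{
    return (unsigned)__builtin_popcountll(x); // one POPCNT instruction with -mpopcnt
}
// in countIntersect: ret += popcount_word(*i & *j);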
Maybe you would like to try PPL, or one of its analogues. I don't really understand what your code is supposed to do, as it doesn't seem to have any output. But side-effect-free code can be effectively parallelized with tools like the Parallel Patterns Library.
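To illustrate the idea with a portable analogue, here is a hedged OpenMP sketch (assumes the map's sorted vectors have first been copied into a std::vector for random access, and that THRESH is passed in; compile with -fopenmp). The outer loop's iterations are independent, which is what makes this safe to parallelize; schedule(dynamic) balances the triangular workload:

#include <cstddef>
#include <iostream>
#include <vector>

// vecs: the map's sorted second elements, copied out for random access
void compare_all(const std::vector<std::vector<long> >& vecs, double thresh)
{
    const long n = (long)vecs.size();
    #pragma omp parallel for schedule(dynamic)
    for (long i = 0; i < n; ++i) {
        for (long j = i + 1; j < n; ++j) {
            std::size_t a = 0, b = 0, inter = 0;
            // same merge-style intersection count as in the question
            while (a < vecs[i].size() && b < vecs[j].size()) {
                if (vecs[i][a] < vecs[j][b]) ++a;
                else if (vecs[i][a] > vecs[j][b]) ++b;
                else { ++a; ++b; ++inter; }
            }
            const std::size_t uni = vecs[i].size() + vecs[j].size() - inter;
            if ((double)inter / uni > thresh) {
                #pragma omp critical // serialize output only
                std::cout << i << " might appear frequently with " << j << '\n';
            }
        }
    }
}

The map keys would need to be copied alongside the vectors if you want to print them instead of the indices.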