The task (from a Bulgarian judge, click on "Език" to change it to English):
I am given the size of the first (S1 = A) of N corals. The size of every subsequent coral (Si, where i > 1) is calculated by the recurrence Si = (B*S(i-1) + C) % D, where A, B, C and D are some constants. I am told that Nemo is near the Kth coral (when the sizes of all corals are sorted in ascending order).
What is the size of the above-mentioned Kth coral ?
I will have T tests and for every one of them I will be given N, K, A, B, C and D and prompted to output the size of the Kth coral.
The requirements:
1 ≤ T ≤ 3
1 ≤ K ≤ N ≤ 10^7
0 ≤ A < D ≤ 10^18
1 ≤ C, B*D ≤ 10^18
Memory available is 64 MB
Time limit is 1.9 sec
The problem I have:
For the worst case scenario I will need 10^7 * 8 B, which is about 76 MB.
The solution, if the available memory were at least 80 MB, would be:
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
using biggie = long long;
int main() {
int t;
std::cin >> t;
int i, n, k, j;
biggie a, b, c, d;
std::vector<biggie>::iterator it_ans;
for (i = 0; i != t; ++i) {
std::cin >> n >> k >> a >> b >> c >> d;
std::vector<biggie> lut{ a };
lut.reserve(n);
for (j = 1; j != n; ++j) {
lut.emplace_back((b * lut.back() + c) % d);
}
it_ans = std::next(lut.begin(), k - 1);
std::nth_element(lut.begin(), it_ans, lut.end());
std::cout << *it_ans << '\n';
}
return 0;
}
Question 1: How can I approach this CP task given the requirements listed above?
Question 2: Is it somehow possible to use std::nth_element to solve it, since I am not able to store all N elements? I mean using std::nth_element in a sliding-window technique (if this is possible).
# Christian Sloper
#include <iostream>
#include <queue>
using biggie = long long;
int main() {
int t;
std::cin >> t;
int i, n, k, j, j_lim;
biggie a, b, c, d, prev, curr;
for (i = 0; i != t; ++i) {
std::cin >> n >> k >> a >> b >> c >> d;
if (k < n - k + 1) {
std::priority_queue<biggie, std::vector<biggie>, std::less<biggie>> q;
q.push(a);
prev = a;
for (j = 1; j != k; ++j) {
curr = (b * prev + c) % d;
q.push(curr);
prev = curr;
}
for (; j != n; ++j) {
curr = (b * prev + c) % d;
if (curr < q.top()) {
q.pop();
q.push(curr);
}
prev = curr;
}
std::cout << q.top() << '\n';
}
else {
std::priority_queue<biggie, std::vector<biggie>, std::greater<biggie>> q;
q.push(a);
prev = a;
for (j = 1, j_lim = n - k + 1; j != j_lim; ++j) {
curr = (b * prev + c) % d;
q.push(curr);
prev = curr;
}
for (; j != n; ++j) {
curr = (b * prev + c) % d;
if (curr > q.top()) {
q.pop();
q.push(curr);
}
prev = curr;
}
std::cout << q.top() << '\n';
}
}
return 0;
}
This gets accepted (Succeeds all 40 tests. Largest time 1.4 seconds, for a test with T=3 and D≤10^9. Largest time for a test with larger D (and thus T=1) is 0.7 seconds.).
#include <iostream>
using biggie = long long;
int main() {
int t;
std::cin >> t;
int i, n, k, j;
biggie a, b, c, d;
for (i = 0; i != t; ++i) {
std::cin >> n >> k >> a >> b >> c >> d;
biggie prefix = 0;
for (int shift = d > 1000000000 ? 40 : 20; shift >= 0; shift -= 20) {
biggie prefix_mask = ((biggie(1) << (40 - shift)) - 1) << (shift + 20);
int count[1 << 20] = {0};
biggie s = a;
int rank = 0;
for (j = 0; j != n; ++j) {
biggie s_vs_prefix = s & prefix_mask;
if (s_vs_prefix < prefix)
++rank;
else if (s_vs_prefix == prefix)
++count[(s >> shift) & ((1 << 20) - 1)];
s = (b * s + c) % d;
}
int i = -1;
while (rank < k)
rank += count[++i];
prefix |= biggie(i) << shift;
}
std::cout << prefix << '\n';
}
return 0;
}
The result is a 60-bit number. I first determine the high 20 bits with one pass through the numbers, then the middle 20 bits in another pass, then the low 20 bits in another.
For the high 20 bits, generate all the numbers and count how often each high-20-bits pattern occurs. After that, add up the counts until you reach K. The pattern at which you reach K covers the K-th number in sorted order; in other words, that's the result's high 20 bits.
The middle and low 20 bits are computed similarly, except we take the prefix known by then (the high 20 bits, or the high+middle 40 bits) into account. As a little optimization, when D is small, I skip computing the high 20 bits. That got me from 2.1 seconds down to 1.4 seconds.
This solution is like user3386109 described, except with bucket size 2^20 instead of 10^6 so I can use bit operations instead of divisions and think of bit patterns instead of ranges.
For the memory constraint you hit:
Si = (B*S(i-1) + C) % D
requires only the value immediately before it. So you can compute the elements in pairs and store only half of them: keep the even-indexed values in the LUT and recompute each odd-indexed value in flight from its even predecessor. Modern CPUs are fast enough to do extra calculations like these.
std::vector<biggie> lut{ a_i,a_i_2,a_i_4,... };
a_i_3=computeOddFromEven(lut[1]);
You can use a longer stride like 4 or 8, too. If the dataset is large, RAM latency dominates, so this amounts to keeping checkpoints across the whole search space to balance memory against CPU work. Checkpoints every 1000 elements would spend a lot of CPU cycles on re-calculation, but then the array would fit in the CPU's L2/L1 cache, which is not bad. When sorting, the maximum re-calculation per element access would be 1000 iterations, i.e. O(1000 * size); that is a big constant, but possibly one the compiler can optimize further if the relevant constants really are compile-time constants. A minimal sketch of this checkpointing idea follows.
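This is my own illustration rather than tested contest code; it assumes the same recurrence as in the question and that B*D fits in 64 bits (as the constraints guarantee):
#include <vector>
// Store every `stride`-th element of the sequence; recompute the rest on demand.
struct CheckpointedSequence {
    std::vector<long long> checkpoints; // s_0, s_stride, s_2*stride, ...
    long long b, c, d;
    int stride;
    CheckpointedSequence(long long a, long long b, long long c, long long d,
                         int n, int stride)
        : b(b), c(c), d(d), stride(stride) {
        long long s = a;
        for (int i = 0; i < n; ++i) {
            if (i % stride == 0) checkpoints.push_back(s);
            s = (b * s + c) % d;
        }
    }
    // O(stride) work per access instead of an O(1) table lookup.
    long long at(int i) const {
        long long s = checkpoints[i / stride];
        for (int j = 0; j < i % stride; ++j)
            s = (b * s + c) % d;
        return s;
    }
};
With stride = 2 this halves the memory; with stride = 1000 the table of 10^7/1000 = 10^4 checkpoints fits easily in L1/L2 cache at the cost of up to 999 recomputation steps per access.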
If CPU performance becomes a problem again:
write a function that generates your source code as a string, with all the "constants" given by the user baked in as literals
compile that code from the command line (assuming the target computer has a compiler such as g++ accessible from the main program)
run it and collect the results
The compiler can enable more speed/memory optimizations when those values really are compile-time constants rather than coming from std::cin. A rough sketch of this idea follows.
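For illustration only (assumed setup: g++ reachable via std::system; the generated program body is left as a placeholder):
#include <cstdlib>
#include <fstream>
// Write a specialized source file with the run-time inputs baked in as constants,
// compile it with g++, then run the resulting binary.
int run_specialized(long long a, long long b, long long c, long long d, int n, int k) {
    std::ofstream src("gen.cpp");
    src << "#include <cstdio>\n"
        << "int main(){\n"
        << "    constexpr long long A=" << a << "LL, B=" << b << "LL, C=" << c
        << "LL, D=" << d << "LL;\n"
        << "    constexpr int N=" << n << ", K=" << k << ";\n"
        << "    // ... selection algorithm specialized for these constants ...\n"
        << "    return 0;\n}\n";
    src.close();
    if (std::system("g++ -O3 -o gen gen.cpp") != 0) return -1; // compilation failed
    return std::system("./gen"); // run the specialized program
}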
If you really need a hard limit on RAM usage, then implement a simple cache whose backing store is the heavy computation itself: brute force O(N^2), or O(L x N) with checkpoints every L elements as in the first method (L = 2, 4, ...).
Here's a sample direct-mapped cache with 8M long-long value space:
int main()
{
std::vector<long long> checkpoints = {
a_0, a_16, a_32,...
};
auto cacheReadMissFunction = [&](int key){
// your pure computational algorithm here; iterate() is a placeholder for one step of the recurrence
long long result = checkpoints[key/16];
for(int step = 0; step < key%16; ++step)
result = iterate(result);
return result;
};
auto cacheWriteMissFunction = [&](int key, long long value){
/* not useful for your algorithm as it doesn't change behavior per element */
// backing_store[key] = value;
};
// due to special optimizations, size has to be 2^k
int cacheSize = 1024*1024*8;
DirectMappedCache<int, long long> cache(cacheSize,cacheReadMissFunction,cacheWriteMissFunction);
std::cout << cache.get(20)<<std::endl;
return 0;
}
If you use a cache-friendly sorting algorithm, direct-mapped cache accesses would get a lot of re-use across comparisons, e.g. if you emit elements one by one following something like a bitonic sort path (which is known at compile time). If that doesn't work, you can try using files as the cache's backing store to sort the whole array at once. Is the file system prohibited? Then the online-compiling method above won't work either.
Implementation of a direct mapped cache (don't forget to call flush() after your algorithm finishes, if you use any cache.set() method):
#ifndef DIRECTMAPPEDCACHE_H_
#define DIRECTMAPPEDCACHE_H_
#include<vector>
#include<functional>
#include<mutex>
#include<iostream>
/* Direct-mapped cache implementation
* Only usable for integer type keys in range [0,maxPositive-1]
*
* CacheKey: type of key (only integers: int, char, size_t)
* CacheValue: type of value that is bound to key (same as above)
*/
template< typename CacheKey, typename CacheValue>
class DirectMappedCache
{
public:
// allocates buffers for numElements number of cache slots/lanes
// readMiss: cache-miss for read operations. User needs to give this function
// to let the cache automatically get data from backing-store
// example: [&](MyClass key){ return redis.get(key); }
// takes a CacheKey as key, returns CacheValue as value
// writeMiss: cache-miss for write operations. User needs to give this function
// to let the cache automatically set data to backing-store
// example: [&](MyClass key, MyAnotherClass value){ redis.set(key,value); }
// takes a CacheKey as key and CacheValue as value
// numElements: has to be integer-power of 2 (e.g. 2,4,8,16,...)
DirectMappedCache(CacheKey numElements,
const std::function<CacheValue(CacheKey)> & readMiss,
const std::function<void(CacheKey,CacheValue)> & writeMiss):size(numElements),sizeM1(numElements-1),loadData(readMiss),saveData(writeMiss)
{
// initialize buffers
for(size_t i=0;i<numElements;i++)
{
valueBuffer.push_back(CacheValue());
isEditedBuffer.push_back(0);
keyBuffer.push_back(CacheKey()-1);// mapping of 0+ allowed
}
}
// get element from cache
// if cache doesn't find it in buffers,
// then cache gets data from backing-store
// then returns the result to user
// then cache is available from RAM on next get/set access with same key
inline
const CacheValue get(const CacheKey & key) noexcept
{
return accessDirect(key,nullptr);
}
// only syntactic difference
inline
const std::vector<CacheValue> getMultiple(const std::vector<CacheKey> & key) noexcept
{
const int n = key.size();
std::vector<CacheValue> result(n);
for(int i=0;i<n;i++)
{
result[i]=accessDirect(key[i],nullptr);
}
return result;
}
// thread-safe but slower version of get()
inline
const CacheValue getThreadSafe(const CacheKey & key) noexcept
{
std::lock_guard<std::mutex> lg(mut);
return accessDirect(key,nullptr);
}
// set element to cache
// if cache doesn't find it in buffers,
// then cache sets data on just cache
// writing to backing-store only happens when
// another access evicts the cache slot containing this key/value
// or when cache is flushed by flush() method
// then returns the given value back
// then cache is available from RAM on next get/set access with same key
inline
void set(const CacheKey & key, const CacheValue & val) noexcept
{
accessDirect(key,&val,1);
}
// thread-safe but slower version of set()
inline
void setThreadSafe(const CacheKey & key, const CacheValue & val) noexcept
{
std::lock_guard<std::mutex> lg(mut);
accessDirect(key,&val,1);
}
// use this before closing the backing-store to store the latest bits of data
void flush()
{
try
{
std::lock_guard<std::mutex> lg(mut);
for (size_t i=0;i<size;i++)
{
if (isEditedBuffer[i] == 1)
{
isEditedBuffer[i]=0;
auto oldKey = keyBuffer[i];
auto oldValue = valueBuffer[i];
saveData(oldKey,oldValue);
}
}
}catch(std::exception &ex){ std::cout<<ex.what()<<std::endl; }
}
// direct mapped access
// opType=0: get
// opType=1: set
CacheValue const accessDirect(const CacheKey & key,const CacheValue * value, const bool opType = 0)
{
// find tag mapped to the key
CacheKey tag = key & sizeM1;
// compare keys
if(keyBuffer[tag] == key)
{
// cache-hit
// "set"
if(opType == 1)
{
isEditedBuffer[tag]=1;
valueBuffer[tag]=*value;
}
// cache hit value
return valueBuffer[tag];
}
else // cache-miss
{
CacheValue oldValue = valueBuffer[tag];
CacheKey oldKey = keyBuffer[tag];
// eviction algorithm start
if(isEditedBuffer[tag] == 1)
{
// if it is "get"
if(opType==0)
{
isEditedBuffer[tag]=0;
}
saveData(oldKey,oldValue);
// "get"
if(opType==0)
{
const CacheValue && loadedData = loadData(key);
valueBuffer[tag]=loadedData;
keyBuffer[tag]=key;
return loadedData;
}
else /* "set" */
{
valueBuffer[tag]=*value;
keyBuffer[tag]=key;
return *value;
}
}
else // not edited
{
// "set"
if(opType == 1)
{
isEditedBuffer[tag]=1;
}
// "get"
if(opType == 0)
{
const CacheValue && loadedData = loadData(key);
valueBuffer[tag]=loadedData;
keyBuffer[tag]=key;
return loadedData;
}
else // "set"
{
valueBuffer[tag]=*value;
keyBuffer[tag]=key;
return *value;
}
}
}
}
private:
const CacheKey size;
const CacheKey sizeM1;
std::mutex mut;
std::vector<CacheValue> valueBuffer;
std::vector<unsigned char> isEditedBuffer;
std::vector<CacheKey> keyBuffer;
const std::function<CacheValue(CacheKey)> loadData;
const std::function<void(CacheKey,CacheValue)> saveData;
};
#endif /* DIRECTMAPPEDCACHE_H_ */
You can solve this problem using a Max-heap.
Insert the first k elements into the max-heap. The largest element of these k will now be at the root.
For each remaining element e:
Compare e to the root.
If e is larger than the root, discard it.
If e is smaller than the root, remove the root and insert e into the heap structure.
After all elements have been processed, the k-th smallest element is at the root.
This method uses O(K) space and O(N log K) time.
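A minimal sketch of that procedure (my illustration; in the actual task the coral values would be generated on the fly rather than stored in a vector):
#include <queue>
#include <vector>
// Return the k-th smallest value (1-based) by keeping a max-heap of the k smallest seen so far.
long long kth_smallest(const std::vector<long long>& values, int k) {
    std::priority_queue<long long> heap; // max-heap
    for (long long v : values) {
        if ((int)heap.size() < k) heap.push(v);
        else if (v < heap.top()) { heap.pop(); heap.push(v); }
    }
    return heap.top();
}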
There’s an algorithm that people often call LazySelect that I think would be perfect here.
With high probability, we make two passes. In the first pass, we save a random sample of size n much less than N. The answer will be around index (K/N)n in the sorted sample, but due to the randomness, we have to be careful. Save the values a and b at (K/N)n ± r instead, where r is the radius of the window. In the second pass, we save all of the values in [a, b], count the number of values less than a (let it be L), and select the value with index K−L if it’s in the window (otherwise, try again).
The theoretical advice on choosing n and r is fine, but I would be pragmatic here. Choose n so that you use most of the available memory; the bigger the sample, the more informative it is. Choose r fairly large as well, but not quite as aggressively due to the randomness.
C++ code below. On the online judge, it’s faster than Kelly’s (max 1.3 seconds on the T=3 tests, 0.5 on the T=1 tests).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <limits>
#include <optional>
#include <random>
#include <vector>
namespace {
class LazySelector {
public:
static constexpr std::int32_t kTargetSampleSize = 1000;
explicit LazySelector() { sample_.reserve(1000000); }
void BeginFirstPass(const std::int32_t n, const std::int32_t k) {
sample_.clear();
mask_ = n / kTargetSampleSize;
mask_ |= mask_ >> 1;
mask_ |= mask_ >> 2;
mask_ |= mask_ >> 4;
mask_ |= mask_ >> 8;
mask_ |= mask_ >> 16;
}
void FirstPass(const std::int64_t value) {
if ((gen_() & mask_) == 0) {
sample_.push_back(value);
}
}
void BeginSecondPass(const std::int32_t n, const std::int32_t k) {
sample_.push_back(std::numeric_limits<std::int64_t>::min());
sample_.push_back(std::numeric_limits<std::int64_t>::max());
const double p = static_cast<double>(sample_.size()) / n;
const double radius = 2 * std::sqrt(sample_.size());
const auto lower =
sample_.begin() + std::clamp<std::int32_t>(std::floor(p * k - radius),
0, sample_.size() - 1);
const auto upper =
sample_.begin() + std::clamp<std::int32_t>(std::ceil(p * k + radius), 0,
sample_.size() - 1);
std::nth_element(sample_.begin(), upper, sample_.end());
std::nth_element(sample_.begin(), lower, upper);
lower_ = *lower;
upper_ = *upper;
sample_.clear();
less_than_lower_ = 0;
equal_to_lower_ = 0;
equal_to_upper_ = 0;
}
void SecondPass(const std::int64_t value) {
if (value < lower_) {
++less_than_lower_;
} else if (upper_ < value) {
} else if (value == lower_) {
++equal_to_lower_;
} else if (value == upper_) {
++equal_to_upper_;
} else {
sample_.push_back(value);
}
}
std::optional<std::int64_t> Select(std::int32_t k) {
if (k < less_than_lower_) {
return std::nullopt;
}
k -= less_than_lower_;
if (k < equal_to_lower_) {
return lower_;
}
k -= equal_to_lower_;
if (k < sample_.size()) {
const auto kth = sample_.begin() + k;
std::nth_element(sample_.begin(), kth, sample_.end());
return *kth;
}
k -= sample_.size();
if (k < equal_to_upper_) {
return upper_;
}
return std::nullopt;
}
private:
std::default_random_engine gen_;
std::vector<std::int64_t> sample_ = {};
std::int32_t mask_ = 0;
std::int64_t lower_ = std::numeric_limits<std::int64_t>::min();
std::int64_t upper_ = std::numeric_limits<std::int64_t>::max();
std::int32_t less_than_lower_ = 0;
std::int32_t equal_to_lower_ = 0;
std::int32_t equal_to_upper_ = 0;
};
} // namespace
int main() {
int t;
std::cin >> t;
for (int i = t; i > 0; --i) {
std::int32_t n;
std::int32_t k;
std::int64_t a;
std::int64_t b;
std::int64_t c;
std::int64_t d;
std::cin >> n >> k >> a >> b >> c >> d;
std::optional<std::int64_t> ans = std::nullopt;
LazySelector selector;
do {
{
selector.BeginFirstPass(n, k);
std::int64_t s = a;
for (std::int32_t j = n; j > 0; --j) {
selector.FirstPass(s);
s = (b * s + c) % d;
}
}
{
selector.BeginSecondPass(n, k);
std::int64_t s = a;
for (std::int32_t j = n; j > 0; --j) {
selector.SecondPass(s);
s = (b * s + c) % d;
}
}
ans = selector.Select(k - 1);
} while (!ans);
std::cout << *ans << '\n';
}
}
I'm trying to solve this problem:
Given an a×b rectangle, your task is to cut it into squares. On each move you can select a rectangle and cut it into two rectangles in such a way that all side lengths remain integers. What is the minimum possible number of moves?
My logic is that the minimum number of cuts means the minimum number of squares; I don't know if it's the correct approach.
I see which side is smaller. Now I know I need to make bigSide/smallSide cuts to get squares with sides of smallSide, and then I am left with smallSide and bigSide % smallSide. I go on until either side is 0 or both are equal.
#include <iostream>
int main() {
int a, b; std::cin >> a >> b; // sides of the rectangle
int res = 0;
while (a != 0 && b != 0) {
if (a > b) {
if (a % b == 0)
res += a / b - 1;
else
res += a / b;
a = a % b;
} else if (b > a) {
if (b % a == 0)
res += b / a - 1;
else
res += b / a;
b = b % a;
} else {
break;
}
}
std::cout << res;
return 0;
}
When the input is 404 288, my code gives 18, but the right answer is actually 10.
What am I doing wrong?
It seems clear to me that the problem defines each move as cutting a rectangle into two rectangles along the integer lines, and then asks for the minimum number of such cuts. As you can see, there is a clear recursive nature to this problem: once you cut a rectangle into two parts, you can recurse and cut each of them into squares with minimum moves, then sum up the answers. The problem is that the recursion might lead to exponential time complexity, which leads us directly to dynamic programming. You have to use memoization to solve it efficiently (worst case time O(a*b*(a+b))). Here is what I'd suggest doing:
#include <iostream>
#include <vector>
using std::vector;
int min_cuts(int a, int b, vector<vector<int> > &mem) {
int min = mem[a][b];
// if already computed, just return the value
if (min > 0)
return min;
// if one side is divisible by the other,
// store min-cuts in 'min'
if (a%b==0)
min= a/b-1;
else if (b%a==0)
min= b/a -1;
// if there's no obvious solution, recurse
else {
// recurse on height
for (int i=1; i<=a/2; i++) {
int m = min_cuts(i,b, mem);
int n = min_cuts(a-i, b, mem);
if (min<0 or m+n+1<min)
min = m + n + 1;
}
// recurse on width
for (int j=1; j<=b/2; j++) {
int m = min_cuts(a,j, mem);
int n = min_cuts(a, b-j, mem);
if (min<0 or m+n+1<min)
min = m + n + 1;
}
}
mem[a][b] = min;
return min;
}
int main() {
int a, b; std::cin >> a >> b; // sides of the rectangle
// -1 means the problem is not solved yet,
vector<vector<int> > mem(a+1, vector<int>(b+1, -1));
int res = min_cuts(a,b,mem);
std::cout << res << std::endl;
return 0;
}
The reason the for loops only go up to a/2 and b/2 is that cutting a paper is symmetric: cutting along vertical line i is the same as cutting along line a-i if you flip the paper vertically. This is a little optimization hack that roughly halves the work.
Another little hack comes from knowing that if you transpose the paper the result is the same, meaning min_cuts(a,b) == min_cuts(b,a), so you can potentially reduce computations by half again. But any major further improvement, say a greedy algorithm, would take more thinking (if one exists at all).
The current answer is a good start, especially the suggestions to use memoization or dynamic programming, and it is potentially efficient enough.
Obviously, all answerers used the first option (memoization) with a sub-par data structure: a vector-of-vectors has considerable space and performance overhead, whereas a (strict) lower triangular matrix stored in a flat array is much more efficient.
Using the maximum value as a sentinel (easier with unsigned) would also reduce complexity.
Finally, let's move to dynamic programming instead of memoization to simplify and get even more efficient:
#include <algorithm>
#include <memory>
#include <utility>
constexpr unsigned min_cuts(unsigned a, unsigned b) {
if (a < b)
std::swap(a, b);
if (a == b || !b)
return 0;
const auto triangle = [](std::size_t n) { return n * (n - 1) / 2; };
const auto p = std::make_unique_for_overwrite<unsigned[]>(triangle(a));
/* const! */ unsigned zero = 0;
const auto f = [&](auto a, auto b) -> auto& {
if (a < b)
std::swap(a, b);
return a == b ? zero : p[triangle(a - 1) + b - 1];
};
for (auto i = 1u; i <= a; ++i) {
for (auto j = 1u; j < i; ++j) {
auto r = -1u;
for (auto k = i / 2; k; --k)
r = std::min(r, f(k, j) + f(i - k, j));
for (auto k = j / 2; k; --k)
r = std::min(r, f(k, i) + f(j - k, i));
f(i, j) = ++r;
}
}
return f(a, b);
}
I want to sort a large array of integers (say 1 million elements) lexicographically.
Example:
input [] = { 100, 21 , 22 , 99 , 1 , 927 }
sorted[] = { 1 , 100, 21 , 22 , 927, 99 }
I have done it using the simplest possible method:
convert all numbers to strings (very costly because it will take huge memory)
use std::sort with strcmp as the comparison function
convert the strings back to integers
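For reference, that string-based baseline might look roughly like this (using std::to_string and std::string's built-in comparison in place of raw strcmp):
#include <algorithm>
#include <string>
#include <vector>
void lex_sort_via_strings(std::vector<int>& v) {
    std::vector<std::string> s;
    s.reserve(v.size());
    for (int x : v) s.push_back(std::to_string(x)); // 1) convert to strings
    std::sort(s.begin(), s.end());                  // 2) lexicographic sort
    for (std::size_t i = 0; i < v.size(); ++i) v[i] = std::stoi(s[i]); // 3) convert back
}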
Is there a better method than this?
Use std::sort() with a suitable comparison function. This cuts down on the memory requirements.
The comparison function can use n % 10, n / 10 % 10, n / 100 % 10 etc. to access the individual digits (for positive integers; negative integers work a bit differently).
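A comparator along those lines might look like this (a sketch, positive integers only):
#include <algorithm>
#include <vector>
// Count decimal digits of a non-negative integer (0 counts as one digit).
static int digit_count(int n) { int c = 1; while (n >= 10) { n /= 10; ++c; } return c; }
// True if the decimal representation of a sorts before that of b.
static bool lex_less(int a, int b) {
    int da = digit_count(a), db = digit_count(b);
    for (int pa = da - 1, pb = db - 1; pa >= 0 && pb >= 0; --pa, --pb) {
        int xa = a, xb = b;
        for (int i = 0; i < pa; ++i) xa /= 10; // drop the pa least significant digits
        for (int i = 0; i < pb; ++i) xb /= 10;
        if (xa % 10 != xb % 10) return xa % 10 < xb % 10;
    }
    return da < db; // equal prefix: the shorter number comes first
}
// Usage: std::sort(v.begin(), v.end(), lex_less);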
To provide any custom sort ordering, you can provide a comparator to std::sort. In this case, it's going to be somewhat complex, using logarithms to inspect individual digits of your number in base 10.
Here's an example — comments inline describe what's going on.
#include <iostream>
#include <algorithm>
#include <cmath>
#include <cassert>
int main() {
int input[] { 100, 21, 22, 99, 1, 927, -50, -24, -160 };
/**
* Sorts the array lexicographically.
*
* The trick is that we have to compare digits left-to-right
* (considering typical Latin decimal notation) and that each of
* two numbers to compare may have a different number of digits.
*
* This is very efficient in storage space, but inefficient in
* execution time; an approach that pre-visits each element and
* stores a translated representation will at least double your
* storage requirements (possibly a problem with large inputs)
* but require only a single translation of each element.
*/
std::sort(
std::begin(input),
std::end(input),
[](int lhs, int rhs) -> bool {
// Returns true if lhs < rhs
// Returns false otherwise
const auto BASE = 10;
const bool LHS_FIRST = true;
const bool RHS_FIRST = false;
const bool EQUAL = false;
// There's no point in doing anything at all
// if both inputs are the same; strict-weak
// ordering requires that we return `false`
// in this case.
if (lhs == rhs) {
return EQUAL;
}
// Compensate for sign
if (lhs < 0 && rhs < 0) {
// When both are negative, sign on its own yields
// no clear ordering between the two arguments.
//
// Remove the sign and continue as for positive
// numbers.
lhs *= -1;
rhs *= -1;
}
else if (lhs < 0) {
// When the LHS is negative but the RHS is not,
// consider the LHS "first" always as we wish to
// prioritise the leading '-'.
return LHS_FIRST;
}
else if (rhs < 0) {
// When the RHS is negative but the LHS is not,
// consider the RHS "first" always as we wish to
// prioritise the leading '-'.
return RHS_FIRST;
}
// Counting the number of digits in both the LHS and RHS
// arguments is *almost* trivial.
const auto lhs_digits = (
lhs == 0
? 1
: std::ceil(std::log(lhs+1)/std::log(BASE))
);
const auto rhs_digits = (
rhs == 0
? 1
: std::ceil(std::log(rhs+1)/std::log(BASE))
);
// Now we loop through the positions, left-to-right,
// calculating the digit at these positions for each
// input, and comparing them numerically. The
// lexicographic nature of the sorting comes from the
// fact that we are doing this per-digit comparison
// rather than considering the input value as a whole.
const auto max_pos = std::max(lhs_digits, rhs_digits);
for (auto pos = 0; pos < max_pos; pos++) {
if (lhs_digits - pos == 0) {
// Ran out of digits on the LHS;
// prioritise the shorter input
return LHS_FIRST;
}
else if (rhs_digits - pos == 0) {
// Ran out of digits on the RHS;
// prioritise the shorter input
return RHS_FIRST;
}
else {
const auto lhs_x = (lhs / static_cast<decltype(BASE)>(std::pow(BASE, lhs_digits - 1 - pos))) % BASE;
const auto rhs_x = (rhs / static_cast<decltype(BASE)>(std::pow(BASE, rhs_digits - 1 - pos))) % BASE;
if (lhs_x < rhs_x)
return LHS_FIRST;
else if (rhs_x < lhs_x)
return RHS_FIRST;
}
}
// If we reached the end and everything still
// matches up, then something probably went wrong
// as I'd have expected to catch this in the tests
// for equality.
assert("Unknown case encountered");
}
);
std::cout << '{';
for (auto x : input)
std::cout << x << ", ";
std::cout << '}';
// Output: {-160, -24, -50, 1, 100, 21, 22, 927, 99, }
}
There are quicker ways to calculate the number of digits in a number, but the above will get you started.
Here's another algorithm which does some of the computation before sorting. It seems to be quite fast, despite the additional copying (see comparisons).
Note:
it only supports positive integers
it only supports integers <= std::numeric_limits<int>::max()/10
N.B. you can optimize count_digits and my_pow10; for example, see Three Optimization Tips for C++ from Andrei Alexandrescu and Any way faster than pow() to compute an integer power of 10 in C++?
Helpers:
#include <random>
#include <vector>
#include <utility>
#include <cmath>
#include <iostream>
#include <algorithm>
#include <limits>
#include <iterator>
// non-optimized version
int count_digits(int p) // returns `0` for `p == 0`
{
int res = 0;
for(; p != 0; ++res)
{
p /= 10;
}
return res;
}
// non-optimized version
int my_pow10(unsigned exp)
{
int res = 1;
for(; exp != 0; --exp)
{
res *= 10;
}
return res;
}
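As one example of such an optimization (a sketch of the unrolled-comparison approach those links describe; it keeps this answer's convention that count_digits(0) == 0):
// optimized version: digit count via unrolled comparisons instead of a division loop
int count_digits_fast(int p) // returns `0` for `p == 0`, like count_digits above
{
    if (p == 0) return 0;
    if (p >= 100000000) return (p >= 1000000000) ? 10 : 9;
    if (p >= 10000) {
        if (p >= 1000000) return (p >= 10000000) ? 8 : 7;
        return (p >= 100000) ? 6 : 5;
    }
    if (p >= 100) return (p >= 1000) ? 4 : 3;
    return (p >= 10) ? 2 : 1;
}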
Algorithm (note - not in-place):
// helper to provide integers with the same number of digits
template<class T, class U>
std::pair<T, T> lexicographic_pair_helper(T const p, U const maxDigits)
{
auto const digits = count_digits(p);
// append zeros so that `l` has `maxDigits` digits
auto const l = static_cast<T>( p * my_pow10(maxDigits-digits) );
return {l, p};
}
template<class RaIt>
using pair_vec
= std::vector<std::pair<typename std::iterator_traits<RaIt>::value_type,
typename std::iterator_traits<RaIt>::value_type>>;
template<class RaIt>
pair_vec<RaIt> lexicographic_sort(RaIt p_beg, RaIt p_end)
{
if(p_beg == p_end) return {};
auto max = *std::max_element(p_beg, p_end);
auto maxDigits = count_digits(max);
pair_vec<RaIt> result;
result.reserve( std::distance(p_beg, p_end) );
for(auto i = p_beg; i != p_end; ++i)
result.push_back( lexicographic_pair_helper(*i, maxDigits) );
using value_type = typename pair_vec<RaIt>::value_type;
std::sort(begin(result), end(result),
[](value_type const& l, value_type const& r)
{
if(l.first < r.first) return true;
if(l.first > r.first) return false;
return l.second < r.second; }
);
return result;
}
Usage example:
int main()
{
std::vector<int> input = { 100, 21 , 22 , 99 , 1 , 927 };
// generate some numbers
/*{
constexpr int number_of_elements = 1E6;
std::random_device rd;
std::mt19937 gen( rd() );
std::uniform_int_distribution<>
dist(0, std::numeric_limits<int>::max()/10);
for(int i = 0; i < number_of_elements; ++i)
input.push_back( dist(gen) );
}*/
std::cout << "unsorted: ";
for(auto const& e : input) std::cout << e << ", ";
std::cout << "\n\n";
auto sorted = lexicographic_sort(begin(input), end(input));
std::cout << "sorted: ";
for(auto const& e : sorted) std::cout << e.second << ", ";
std::cout << "\n\n";
}
Here's a community wiki to compare the solutions. I took nim's code and made it easily extensible. Feel free to add your solutions and outputs.
Sample runs on an old slow computer (3 GB RAM, Core2Duo U9400) with g++ 4.9, -O3 -march=native:
number of elements: 1e+03
size of integer type: 4
reference solution: Lightness Races in Orbit
solution "dyp":
duration: 0 ms and 301 microseconds
comparison to reference solution: exact match
solution "Nim":
duration: 2 ms and 160 microseconds
comparison to reference solution: exact match
solution "nyarlathotep":
duration: 8 ms and 126 microseconds
comparison to reference solution: exact match
solution "notbad":
duration: 1 ms and 102 microseconds
comparison to reference solution: exact match
solution "Eric Postpischil":
duration: 2 ms and 550 microseconds
comparison to reference solution: exact match
solution "Lightness Races in Orbit":
duration: 17 ms and 469 microseconds
comparison to reference solution: exact match
solution "pts":
duration: 1 ms and 92 microseconds
comparison to reference solution: exact match
==========================================================
number of elements: 1e+04
size of integer type: 4
reference solution: Lightness Races in Orbit
solution "nyarlathotep":
duration: 109 ms and 712 microseconds
comparison to reference solution: exact match
solution "Lightness Races in Orbit":
duration: 272 ms and 819 microseconds
comparison to reference solution: exact match
solution "dyp":
duration: 1 ms and 748 microseconds
comparison to reference solution: exact match
solution "notbad":
duration: 16 ms and 115 microseconds
comparison to reference solution: exact match
solution "pts":
duration: 15 ms and 10 microseconds
comparison to reference solution: exact match
solution "Eric Postpischil":
duration: 33 ms and 301 microseconds
comparison to reference solution: exact match
solution "Nim":
duration: 17 ms and 83 microseconds
comparison to reference solution: exact match
==========================================================
number of elements: 1e+05
size of integer type: 4
reference solution: Lightness Races in Orbit
solution "Nim":
duration: 217 ms and 4 microseconds
comparison to reference solution: exact match
solution "pts":
duration: 199 ms and 505 microseconds
comparison to reference solution: exact match
solution "dyp":
duration: 20 ms and 330 microseconds
comparison to reference solution: exact match
solution "Eric Postpischil":
duration: 415 ms and 477 microseconds
comparison to reference solution: exact match
solution "Lightness Races in Orbit":
duration: 3955 ms and 58 microseconds
comparison to reference solution: exact match
solution "notbad":
duration: 215 ms and 259 microseconds
comparison to reference solution: exact match
solution "nyarlathotep":
duration: 1341 ms and 46 microseconds
comparison to reference solution: mismatch found
==========================================================
number of elements: 1e+06
size of integer type: 4
reference solution: Lightness Races in Orbit
solution "Lightness Races in Orbit":
duration: 52861 ms and 314 microseconds
comparison to reference solution: exact match
solution "Eric Postpischil":
duration: 4757 ms and 608 microseconds
comparison to reference solution: exact match
solution "nyarlathotep":
duration: 15654 ms and 195 microseconds
comparison to reference solution: mismatch found
solution "dyp":
duration: 233 ms and 779 microseconds
comparison to reference solution: exact match
solution "pts":
duration: 2181 ms and 634 microseconds
comparison to reference solution: exact match
solution "Nim":
duration: 2539 ms and 9 microseconds
comparison to reference solution: exact match
solution "notbad":
duration: 2675 ms and 362 microseconds
comparison to reference solution: exact match
==========================================================
number of elements: 1e+07
size of integer type: 4
reference solution: Lightness Races in Orbit
solution "notbad":
duration: 33425 ms and 423 microseconds
comparison to reference solution: exact match
solution "pts":
duration: 26000 ms and 398 microseconds
comparison to reference solution: exact match
solution "Eric Postpischil":
duration: 56206 ms and 359 microseconds
comparison to reference solution: exact match
solution "Lightness Races in Orbit":
duration: 658540 ms and 342 microseconds
comparison to reference solution: exact match
solution "nyarlathotep":
duration: 187064 ms and 518 microseconds
comparison to reference solution: mismatch found
solution "Nim":
duration: 30519 ms and 227 microseconds
comparison to reference solution: exact match
solution "dyp":
duration: 2624 ms and 644 microseconds
comparison to reference solution: exact match
The algorithms have to be structs with function-call operator templates that support the interface:
template<class RaIt> operator()(RaIt begin, RaIt end);
A copy of the input data is provided as a parameter, the algorithm is expected to provide the result in the same range (e.g. in-place sort).
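For instance, a trivial conforming algorithm struct (plain std::sort, just to show the expected shape) would be:
struct example_algo
{
    // sorts the provided copy in place, as the harness expects
    template<class RaIt> void operator()(RaIt b, RaIt e)
    {
        std::sort(b, e);
    }
};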
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <random>
#include <vector>
#include <utility>
#include <cmath>
#include <cassert>
#include <chrono>
#include <cstring>
#include <climits>
#include <functional>
#include <cstdlib>
#include <iomanip>
using duration_t = decltype( std::chrono::high_resolution_clock::now()
- std::chrono::high_resolution_clock::now());
template<class T>
struct result_t
{
std::vector<T> numbers;
duration_t duration;
char const* name;
};
template<class RaIt, class F>
result_t<typename std::iterator_traits<RaIt>::value_type>
apply_algorithm(RaIt p_beg, RaIt p_end, F f, char const* name)
{
using value_type = typename std::iterator_traits<RaIt>::value_type;
std::vector<value_type> inplace(p_beg, p_end);
auto start = std::chrono::high_resolution_clock::now();
f(begin(inplace), end(inplace));
auto end = std::chrono::high_resolution_clock::now();
auto duration = end - start;
return {std::move(inplace), duration, name};
}
// non-optimized version
int count_digits(int p) // returns `0` for `p == 0`
{
int res = 0;
for(; p != 0; ++res)
{
p /= 10;
}
return res;
}
// non-optimized version
int my_pow10(unsigned exp)
{
int res = 1;
for(; exp != 0; --exp)
{
res *= 10;
}
return res;
}
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// paste algorithms here
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
int main(int argc, char** argv)
{
using integer_t = int;
constexpr integer_t dist_min = 0;
constexpr integer_t dist_max = std::numeric_limits<integer_t>::max()/10;
constexpr std::size_t default_number_of_elements = 1E6;
const std::size_t number_of_elements = argc>1 ? std::atoll(argv[1]) :
default_number_of_elements;
std::cout << "number of elements: ";
std::cout << std::scientific << std::setprecision(0);
std::cout << (double)number_of_elements << "\n";
std::cout << /*std::defaultfloat <<*/ std::setprecision(6);
std::cout.unsetf(std::ios_base::floatfield);
std::cout << "size of integer type: " << sizeof(integer_t) << "\n\n";
std::vector<integer_t> input;
{
input.reserve(number_of_elements);
std::random_device rd;
std::mt19937 gen( rd() );
std::uniform_int_distribution<> dist(dist_min, dist_max);
for(std::size_t i = 0; i < number_of_elements; ++i)
input.push_back( dist(gen) );
}
auto b = begin(input);
auto e = end(input);
using res_t = result_t<integer_t>;
std::vector< std::function<res_t()> > algorithms;
#define MAKE_BINDER(B, E, ALGO, NAME) \
std::bind( &apply_algorithm<decltype(B),decltype(ALGO)>, \
B,E,ALGO,NAME )
constexpr auto lightness_name = "Lightness Races in Orbit";
algorithms.push_back( MAKE_BINDER(b, e, lightness(), lightness_name) );
algorithms.push_back( MAKE_BINDER(b, e, dyp(), "dyp") );
algorithms.push_back( MAKE_BINDER(b, e, nim(), "Nim") );
algorithms.push_back( MAKE_BINDER(b, e, pts(), "pts") );
algorithms.push_back( MAKE_BINDER(b, e, epost(), "Eric Postpischil") );
algorithms.push_back( MAKE_BINDER(b, e, nyar(), "nyarlathotep") );
algorithms.push_back( MAKE_BINDER(b, e, notbad(), "notbad") );
{
std::srand( std::random_device()() );
std::random_shuffle(begin(algorithms), end(algorithms));
}
std::vector< result_t<integer_t> > res;
for(auto& algo : algorithms)
res.push_back( algo() );
auto reference_solution
= *std::find_if(begin(res), end(res),
[](result_t<integer_t> const& p)
{ return 0 == std::strcmp(lightness_name, p.name); });
std::cout << "reference solution: "<<reference_solution.name<<"\n\n";
for(auto const& e : res)
{
std::cout << "solution \""<<e.name<<"\":\n";
auto ms =
std::chrono::duration_cast<std::chrono::microseconds>(e.duration);
std::cout << "\tduration: "<<ms.count()/1000<<" ms and "
<<ms.count()%1000<<" microseconds\n";
std::cout << "\tcomparison to reference solution: ";
if(e.numbers.size() != reference_solution.numbers.size())
{
std::cout << "output count mismatch\n";
break;
}
auto mismatch = std::mismatch(begin(e.numbers), end(e.numbers),
begin(reference_solution.numbers)).first;
if(end(e.numbers) == mismatch)
{
std::cout << "exact match\n";
}else
{
std::cout << "mismatch found\n";
}
}
}
Current algorithms; note I replaced the digit counters and pow-of-10 with the global function, so we all benefit if someone optimizes.
struct lightness
{
template<class RaIt> void operator()(RaIt b, RaIt e)
{
using T = typename std::iterator_traits<RaIt>::value_type;
/**
* Sorts the array lexicographically.
*
* The trick is that we have to compare digits left-to-right
* (considering typical Latin decimal notation) and that each of
* two numbers to compare may have a different number of digits.
*
* This is very efficient in storage space, but inefficient in
* execution time; an approach that pre-visits each element and
* stores a translated representation will at least double your
* storage requirements (possibly a problem with large inputs)
* but require only a single translation of each element.
*/
std::sort(
b,
e,
[](T lhs, T rhs) -> bool {
// Returns true if lhs < rhs
// Returns false otherwise
const auto BASE = 10;
const bool LHS_FIRST = true;
const bool RHS_FIRST = false;
const bool EQUAL = false;
// There's no point in doing anything at all
// if both inputs are the same; strict-weak
// ordering requires that we return `false`
// in this case.
if (lhs == rhs) {
return EQUAL;
}
// Compensate for sign
if (lhs < 0 && rhs < 0) {
// When both are negative, sign on its own yields
// no clear ordering between the two arguments.
//
// Remove the sign and continue as for positive
// numbers.
lhs *= -1;
rhs *= -1;
}
else if (lhs < 0) {
// When the LHS is negative but the RHS is not,
// consider the LHS "first" always as we wish to
// prioritise the leading '-'.
return LHS_FIRST;
}
else if (rhs < 0) {
// When the RHS is negative but the LHS is not,
// consider the RHS "first" always as we wish to
// prioritise the leading '-'.
return RHS_FIRST;
}
// Counting the number of digits in both the LHS and RHS
// arguments is *almost* trivial.
const auto lhs_digits = (
lhs == 0
? 1
: std::ceil(std::log(lhs+1)/std::log(BASE))
);
const auto rhs_digits = (
rhs == 0
? 1
: std::ceil(std::log(rhs+1)/std::log(BASE))
);
// Now we loop through the positions, left-to-right,
// calculating the digit at these positions for each
// input, and comparing them numerically. The
// lexicographic nature of the sorting comes from the
// fact that we are doing this per-digit comparison
// rather than considering the input value as a whole.
const auto max_pos = std::max(lhs_digits, rhs_digits);
for (auto pos = 0; pos < max_pos; pos++) {
if (lhs_digits - pos == 0) {
// Ran out of digits on the LHS;
// prioritise the shorter input
return LHS_FIRST;
}
else if (rhs_digits - pos == 0) {
// Ran out of digits on the RHS;
// prioritise the shorter input
return RHS_FIRST;
}
else {
const auto lhs_x = (lhs / static_cast<decltype(BASE)>(std::pow(BASE, lhs_digits - 1 - pos))) % BASE;
const auto rhs_x = (rhs / static_cast<decltype(BASE)>(std::pow(BASE, rhs_digits - 1 - pos))) % BASE;
if (lhs_x < rhs_x)
return LHS_FIRST;
else if (rhs_x < lhs_x)
return RHS_FIRST;
}
}
// If we reached the end and everything still
// matches up, then something probably went wrong
// as I'd have expected to catch this in the tests
// for equality.
assert("Unknown case encountered");
// dyp: suppress warning and throw
throw "up";
}
);
}
};
namespace ndyp
{
// helper to provide integers with the same number of digits
template<class T, class U>
std::pair<T, T> lexicographic_pair_helper(T const p, U const maxDigits)
{
auto const digits = count_digits(p);
// append zeros so that `l` has `maxDigits` digits
auto const l = static_cast<T>( p * my_pow10(maxDigits-digits) );
return {l, p};
}
template<class RaIt>
using pair_vec
= std::vector<std::pair<typename std::iterator_traits<RaIt>::value_type,
typename std::iterator_traits<RaIt>::value_type>>;
template<class RaIt>
pair_vec<RaIt> lexicographic_sort(RaIt p_beg, RaIt p_end)
{
if(p_beg == p_end) return pair_vec<RaIt>{};
auto max = *std::max_element(p_beg, p_end);
auto maxDigits = count_digits(max);
pair_vec<RaIt> result;
result.reserve( std::distance(p_beg, p_end) );
for(auto i = p_beg; i != p_end; ++i)
result.push_back( lexicographic_pair_helper(*i, maxDigits) );
using value_type = typename pair_vec<RaIt>::value_type;
std::sort(begin(result), end(result),
[](value_type const& l, value_type const& r)
{
if(l.first < r.first) return true;
if(l.first > r.first) return false;
return l.second < r.second; }
);
return result;
}
}
struct dyp
{
template<class RaIt> void operator()(RaIt b, RaIt e)
{
auto pairvec = ndyp::lexicographic_sort(b, e);
std::transform(begin(pairvec), end(pairvec), b,
[](typename decltype(pairvec)::value_type const& e) { return e.second; });
}
};
namespace nnim
{
bool comp(int l, int r)
{
int lv[10] = {}; // probably possible to get this from numeric_limits
int rv[10] = {};
int lc = 10; // ditto
int rc = 10;
while (l || r)
{
if (l)
{
auto t = l / 10;
lv[--lc] = l - (t * 10);
l = t;
}
if (r)
{
auto t = r / 10;
rv[--rc] = r - (t * 10);
r = t;
}
}
while (lc < 10 && rc < 10)
{
if (lv[lc] == rv[rc])
{
lc++;
rc++;
}
else
return lv[lc] < rv[rc];
}
return lc > rc;
}
}
struct nim
{
template<class RaIt> void operator()(RaIt b, RaIt e)
{
std::sort(b, e, nnim::comp);
}
};
struct pts
{
template<class T> static bool lex_less(T a, T b) {
unsigned la = 1, lb = 1;
for (T t = a; t > 9; t /= 10) ++la;
for (T t = b; t > 9; t /= 10) ++lb;
const bool ll = la < lb;
while (la > lb) { b *= 10; ++lb; }
while (lb > la) { a *= 10; ++la; }
return a == b ? ll : a < b;
}
template<class RaIt> void operator()(RaIt b, RaIt e)
{
std::sort(b, e, lex_less<typename std::iterator_traits<RaIt>::value_type>);
}
};
struct epost
{
static bool compare(int x, int y)
{
static const double limit = .5 * (log(INT_MAX) - log(INT_MAX-1));
double lx = log10(x);
double ly = log10(y);
double fx = lx - floor(lx); // Get the mantissa of lx.
double fy = ly - floor(ly); // Get the mantissa of ly.
return fabs(fx - fy) < limit ? lx < ly : fx < fy;
}
template<class RaIt> void operator()(RaIt b, RaIt e)
{
std::sort(b, e, compare);
}
};
struct nyar
{
static bool lexiSmaller(int i1, int i2)
{
int digits1 = count_digits(i1);
int digits2 = count_digits(i2);
double val1 = i1/pow(10.0, digits1-1);
double val2 = i2/pow(10.0, digits2-1);
while (digits1 > 0 && digits2 > 0 && (int)val1 == (int)val2)
{
digits1--;
digits2--;
val1 = (val1 - (int)val1)*10;
val2 = (val2 - (int)val2)*10;
}
if (digits1 > 0 && digits2 > 0)
{
return (int)val1 < (int)val2;
}
return (digits2 > 0);
}
template<class RaIt> void operator()(RaIt b, RaIt e)
{
std::sort(b, e, lexiSmaller);
}
};
struct notbad
{
static int up_10pow(int n) {
int ans = 1;
while (ans < n) ans *= 10;
return ans;
}
static bool compare(int v1, int v2) {
int ceil1 = up_10pow(v1), ceil2 = up_10pow(v2);
while ( ceil1 != 0 && ceil2 != 0) {
if (v1 / ceil1 < v2 / ceil2) return true;
else if (v1 / ceil1 > v2 / ceil2) return false;
ceil1 /= 10;
ceil2 /= 10;
}
if (v1 < v2) return true;
return false;
}
template<class RaIt> void operator()(RaIt b, RaIt e)
{
std::sort(b, e, compare);
}
};
I believe the following works as a sort comparison function for positive integers provided the integer type used is substantially narrower than the double type (e.g., 32-bit int and 64-bit double) and the log10 routine used returns exactly correct results for exact powers of 10 (which a good implementation does):
static const double limit = .5 * (log(INT_MAX) - log(INT_MAX-1));
double lx = log10(x);
double ly = log10(y);
double fx = lx - floor(lx); // Get the mantissa of lx.
double fy = ly - floor(ly); // Get the mantissa of ly.
return fabs(fx - fy) < limit ? lx < ly : fx < fy;
It works by comparing the mantissas of the logarithms. The mantissas are the fractional parts of the logarithm, and they indicate the value of the significant digits of a number without the magnitude (e.g., the logarithms of 31, 3.1, and 310 have exactly the same mantissa).
The purpose of fabs(fx - fy) < limit is to allow for errors in taking the logarithm, which occur both because implementations of log10 are imperfect and because the floating-point format forces some error. (The integer portions of the logarithms of 31 and 310 use different numbers of bits, so there are different numbers of bits left for the significand, so they end up being rounded to slightly different values.) As long as the integer type is substantially narrower than the double type, the calculated limit will be much larger than the error in log10. Thus, the test fabs(fx - fy) < limit essentially tells us whether two calculated mantissas would be equal if calculated exactly.
If the mantissas differ, they indicate the lexicographic order, so we return fx < fy. If they are equal, then the integer portion of the logarithm tells us the order, so we return lx < ly.
It is simple to test whether log10 returns correct results for every power of ten, since there are so few of them. If it does not, adjustments can be made easily: Insert if (1-fx < limit) fx = 0; if (1-fy < limit) fy = 0;. This allows for when log10 returns something like 4.99999… when it should have returned 5.
This method has the advantage of not using loops or division (which is time-consuming on many processors).
The task sounds like a natural fit for an MSD variant of Radix Sort with padding ( http://en.wikipedia.org/wiki/Radix_sort ).
It depends on how much code you want to throw at it. The simple comparator-based code the others show does O(n log n) comparisons (each costing up to O(k) digit operations), while a fully optimized radix sort would be O(kn).
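A rough sketch of the padding idea combined with a radix sort (an LSD variant is shown here for brevity; positive ints only, assuming the padded keys fit in a long long):
#include <algorithm>
#include <array>
#include <cstddef>
#include <utility>
#include <vector>
// Lexicographic sort of positive ints: pad each number with trailing zeros to the
// maximum digit count, then LSD radix-sort the padded keys, with a preliminary
// pass that puts numbers with fewer digits first to break ties between equal keys.
void lex_radix_sort(std::vector<int>& v) {
    if (v.empty()) return;
    auto digits = [](long long n) { int c = 1; while (n >= 10) { n /= 10; ++c; } return c; };
    const int maxDigits = digits(*std::max_element(v.begin(), v.end()));
    std::vector<std::pair<long long, int>> cur, tmp; // (padded key, original value)
    cur.reserve(v.size());
    for (int x : v) {
        long long key = x;
        for (int d = digits(x); d < maxDigits; ++d) key *= 10; // pad with trailing zeros
        cur.push_back({key, x});
    }
    tmp.resize(cur.size());
    // Tie-break pass first (least significant in LSD order): among equal padded
    // keys, the number with fewer original digits sorts first.
    {
        std::array<int, 11> count{}, pos{};
        for (const auto& p : cur) ++count[digits(p.second)];
        for (int d = 1; d <= 10; ++d) pos[d] = pos[d - 1] + count[d - 1];
        for (const auto& p : cur) tmp[pos[digits(p.second)]++] = p;
        cur.swap(tmp);
    }
    // One stable counting-sort pass per digit of the padded key.
    long long div = 1;
    for (int pass = 0; pass < maxDigits; ++pass, div *= 10) {
        std::array<int, 10> count{}, pos{};
        for (const auto& p : cur) ++count[(p.first / div) % 10];
        for (int d = 1; d < 10; ++d) pos[d] = pos[d - 1] + count[d - 1];
        for (const auto& p : cur) tmp[pos[(p.first / div) % 10]++] = p;
        cur.swap(tmp);
    }
    for (std::size_t i = 0; i < v.size(); ++i) v[i] = cur[i].second;
}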
A compact solution if all your numbers are nonnegative and they are small enough so that multiplying them by 10 doesn't cause an overflow:
template<class T> bool lex_less(T a, T b) {
unsigned la = 1, lb = 1;
for (T t = a; t > 9; t /= 10) ++la;
for (T t = b; t > 9; t /= 10) ++lb;
const bool ll = la < lb;
while (la > lb) { b *= 10; ++lb; }
while (lb > la) { a *= 10; ++la; }
return a == b ? ll : a < b;
}
Run it like this:
#include <iostream>
#include <algorithm>
int main(int, char **) {
unsigned short input[] = { 100, 21 , 22 , 99 , 1 , 927 };
unsigned input_size = sizeof(input) / sizeof(input[0]);
std::sort(input, input + input_size, lex_less<unsigned short>);
for (unsigned i = 0; i < input_size; ++i) {
std::cout << ' ' << input[i];
}
std::cout << std::endl;
return 0;
}
You could try using the / and % operators to get at the individual digits, e.g. 121 / 100 gives you the first digit and 121 % 10 the last, but you'll have to find a way around the fact that the numbers have different lengths.
So first find the maximum value in the array (std::max_element does this, or you can roll your own):
int Max (int* pdata,int size)
{
int temp_max =0 ;
for (int i =0 ; i < size ; i++)
{
if (*(pdata+i) > temp_max)
{
temp_max = *(pdata+i);
}
}
return temp_max;
}
This function will return the number of digits in the number
int Digit_checker(int n)
{
int num_digits = 1;
while (true)
{
if ((n % 10) == n)
return num_digits;
num_digits++;
n = n/10;
}
return num_digits;
}
Let number of digits in max equal n.
Once you have this, open a for loop of the form
for (int i = 1; i < n ; i++)
then you can go through your array and use "data[i] % (10^(n-i))" to get access to the first digit, then
sort on that, and on the next iteration you'll get access to the second digit. I don't know how you'll sort them, though.
It won't work for negative numbers, and you'll have to work around data[i] % (10^(n-i)) returning the number itself for numbers with fewer digits than the maximum.
Provide a comparator that compares two integers lexicographically. For each integer, find the smallest power of 10 which is not less than the given integer, then compare the digits one by one.
#include <vector>
#include <algorithm>
using namespace std;
class CmpIntLex {
int up_10pow(int n) {
int ans = 1;
while (ans < n) ans *= 10;
return ans;
}
public:
bool operator ()(int v1, int v2) {
int ceil1 = up_10pow(v1), ceil2 = up_10pow(v2);
while ( ceil1 != 0 && ceil2 != 0) {
if (v1 / ceil1 < v2 / ceil2) return true;
else if (v1 / ceil1 > v2 / ceil2) return false;
ceil1 /= 10;
ceil2 /= 10;
}
if (v1 < v2) return true;
return false;
}
};
int main() {
vector<int> vi = {12,45,12134,85};
sort(vi.begin(), vi.end(), CmpIntLex());
}
While some other answers here (Lightness's, notbad's) are already showing quite good code, I believe I can add one solution which might be more performant, since it requires neither division nor pow in the inner loop; but it requires floating point arithmetic, which again might make it slow, and possibly inaccurate for large numbers:
#include <algorithm>
#include <cmath>
#include <iostream>
#include <assert.h>
// method taken from http://stackoverflow.com/a/1489873/671366
template <class T>
int numDigits(T number)
{
int digits = 0;
if (number < 0) digits = 1; // remove this line if '-' counts as a digit
while (number) {
number /= 10;
digits++;
}
return digits;
}
bool lexiSmaller(int i1, int i2)
{
int digits1 = numDigits(i1);
int digits2 = numDigits(i2);
double val1 = i1/pow(10.0, digits1-1);
double val2 = i2/pow(10.0, digits2-1);
while (digits1 > 0 && digits2 > 0 && (int)val1 == (int)val2)
{
digits1--;
digits2--;
val1 = (val1 - (int)val1)*10;
val2 = (val2 - (int)val2)*10;
}
if (digits1 > 0 && digits2 > 0)
{
return (int)val1 < (int)val2;
}
return (digits2 > 0);
}
int main(int argc, char* argv[])
{
// just testing whether the comparison function works as expected:
assert (lexiSmaller(1, 100));
assert (!lexiSmaller(100, 1));
assert (lexiSmaller(100, 22));
assert (!lexiSmaller(22, 100));
assert (lexiSmaller(927, 99));
assert (!lexiSmaller(99, 927));
assert (lexiSmaller(1, 927));
assert (!lexiSmaller(927, 1));
assert (lexiSmaller(21, 22));
assert (!lexiSmaller(22, 21));
assert (lexiSmaller(22, 99));
assert (!lexiSmaller(99, 22));
// use the comparison function for the actual sorting:
int input[] = { 100 , 21 , 22 , 99 , 1 ,927 };
std::sort(&input[0], &input[6], lexiSmaller); // sort all six elements
std::cout << "sorted: ";
for (int i=0; i<6; ++i)
{
std::cout << input[i];
if (i<5)
{
std::cout << ", ";
}
}
std::cout << std::endl;
return 0;
}
Though I have to admit I haven't tested the performance yet.
Here is the dumb solution that doesn't use any floating point tricks. It's pretty much the same as the string comparison, but doesn't use a string per se. It also doesn't handle negative numbers; to do that, add a section at the top...
bool comp(int l, int r)
{
int lv[10] = {}; // probably possible to get this from numeric_limits
int rv[10] = {};
int lc = 10; // ditto
int rc = 10;
while (l || r)
{
if (l)
{
auto t = l / 10;
lv[--lc] = l - (t * 10);
l = t;
}
if (r)
{
auto t = r / 10;
rv[--rc] = r - (t * 10);
r = t;
}
}
while (lc < 10 && rc < 10)
{
if (lv[lc] == rv[rc])
{
lc++;
rc++;
}
else
return lv[lc] < rv[rc];
}
return lc > rc;
}
It's fast, and I'm sure it's possible to make it faster still, but it works and it's dumb enough to understand...
EDIT: I hate to dump lots of code, but here is a comparison of all the solutions so far.
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <random>
#include <vector>
#include <utility>
#include <cmath>
#include <cassert>
#include <chrono>
std::pair<int, int> lexicographic_pair_helper(int p, int maxDigits)
{
int digits = std::log10(p);
int l = p*std::pow(10, maxDigits-digits);
return {l, p};
}
bool l_comp(int l, int r)
{
int lv[10] = {}; // probably possible to get this from numeric_limits
int rv[10] = {};
int lc = 10; // ditto
int rc = 10;
while (l || r)
{
if (l)
{
auto t = l / 10;
lv[--lc] = l - (t * 10);
l = t;
}
if (r)
{
auto t = r / 10;
rv[--rc] = r - (t * 10);
r = t;
}
}
while (lc < 10 && rc < 10)
{
if (lv[lc] == rv[rc])
{
lc++;
rc++;
}
else
return lv[lc] < rv[rc];
}
return lc > rc;
}
int up_10pow(int n) {
int ans = 1;
while (ans < n) ans *= 10;
return ans;
}
bool l_comp2(int v1, int v2) {
int n1 = up_10pow(v1), n2 = up_10pow(v2);
while ( v1 != 0 && v2 != 0) {
if (v1 / n1 < v2 / n2) return true;
else if (v1 / n1 > v2 / n2) return false;
v1 /= 10;
v2 /= 10;
n1 /= 10;
n2 /= 10;
}
if (v1 == 0 && v2 != 0) return true;
return false;
}
int main()
{
std::vector<int> numbers;
{
constexpr int number_of_elements = 1E6;
std::random_device rd;
std::mt19937 gen( rd() );
std::uniform_int_distribution<> dist;
for(int i = 0; i < number_of_elements; ++i) numbers.push_back( dist(gen) );
}
std::vector<int> lo(numbers);
std::vector<int> dyp(numbers);
std::vector<int> nim(numbers);
std::vector<int> nb(numbers);
std::cout << "starting..." << std::endl;
{
auto start = std::chrono::high_resolution_clock::now();
/**
* Sorts the array lexicographically.
*
* The trick is that we have to compare digits left-to-right
* (considering typical Latin decimal notation) and that each of
* two numbers to compare may have a different number of digits.
*
* This probably isn't very efficient, so I wouldn't do it on
* "millions" of numbers. But, it works...
*/
std::sort(
std::begin(lo),
std::end(lo),
[](int lhs, int rhs) -> bool {
// Returns true if lhs < rhs
// Returns false otherwise
const auto BASE = 10;
const bool LHS_FIRST = true;
const bool RHS_FIRST = false;
const bool EQUAL = false;
// There's no point in doing anything at all
// if both inputs are the same; strict-weak
// ordering requires that we return `false`
// in this case.
if (lhs == rhs) {
return EQUAL;
}
// Compensate for sign
if (lhs < 0 && rhs < 0) {
// When both are negative, sign on its own yields
// no clear ordering between the two arguments.
//
// Remove the sign and continue as for positive
// numbers.
lhs *= -1;
rhs *= -1;
}
else if (lhs < 0) {
// When the LHS is negative but the RHS is not,
// consider the LHS "first" always as we wish to
// prioritise the leading '-'.
return LHS_FIRST;
}
else if (rhs < 0) {
// When the RHS is negative but the LHS is not,
// consider the RHS "first" always as we wish to
// prioritise the leading '-'.
return RHS_FIRST;
}
// Counting the number of digits in both the LHS and RHS
// arguments is *almost* trivial.
const auto lhs_digits = (
lhs == 0
? 1
: std::ceil(std::log(lhs+1)/std::log(BASE))
);
const auto rhs_digits = (
rhs == 0
? 1
: std::ceil(std::log(rhs+1)/std::log(BASE))
);
// Now we loop through the positions, left-to-right,
// calculating the digit at these positions for each
// input, and comparing them numerically. The
// lexicographic nature of the sorting comes from the
// fact that we are doing this per-digit comparison
// rather than considering the input value as a whole.
const auto max_pos = std::max(lhs_digits, rhs_digits);
for (auto pos = 0; pos < max_pos; pos++) {
if (lhs_digits - pos == 0) {
// Ran out of digits on the LHS;
// prioritise the shorter input
return LHS_FIRST;
}
else if (rhs_digits - pos == 0) {
// Ran out of digits on the RHS;
// prioritise the shorter input
return RHS_FIRST;
}
else {
const auto lhs_x = (lhs / static_cast<decltype(BASE)>(std::pow(BASE, lhs_digits - 1 - pos))) % BASE;
const auto rhs_x = (rhs / static_cast<decltype(BASE)>(std::pow(BASE, rhs_digits - 1 - pos))) % BASE;
if (lhs_x < rhs_x)
return LHS_FIRST;
else if (rhs_x < lhs_x)
return RHS_FIRST;
}
}
// If we reached the end and everything still
// matches up, then something probably went wrong
// as I'd have expected to catch this in the tests
// for equality.
assert("Unknown case encountered");
}
);
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = end - start;
std::cout << "Lightness: " << elapsed.count() << '\n';
}
{
auto start = std::chrono::high_resolution_clock::now();
auto max = *std::max_element(begin(dyp), end(dyp));
int maxDigits = std::log10(max);
std::vector<std::pair<int,int>> temp;
temp.reserve(dyp.size());
for(auto const& e : dyp) temp.push_back( lexicographic_pair_helper(e, maxDigits) );
std::sort(begin(temp), end(temp), [](std::pair<int, int> const& l, std::pair<int, int> const& r)
{ if(l.first < r.first) return true; if(l.first > r.first) return false; return l.second < r.second; });
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = end - start;
std::cout << "Dyp: " << elapsed.count() << '\n';
}
{
auto start = std::chrono::high_resolution_clock::now();
std::sort (nim.begin(), nim.end(), l_comp);
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = end - start;
std::cout << "Nim: " << elapsed.count() << '\n';
}
// {
// auto start = std::chrono::high_resolution_clock::now();
// std::sort (nb.begin(), nb.end(), l_comp2);
// auto end = std::chrono::high_resolution_clock::now();
// auto elapsed = end - start;
// std::cout << "notbad: " << elapsed.count() << '\n';
// }
std::cout << (nim == lo) << std::endl;
std::cout << (nim == dyp) << std::endl;
std::cout << (lo == dyp) << std::endl;
// std::cout << (lo == nb) << std::endl;
}
Based on #Oswald's answer, below is some code that does the same.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
bool compare(string a, string b){
// Check each digit
int i = 0, j = 0;
while(i < a.size() && j < b.size()){
// If different digits
if(a[i] - '0' != b[j] - '0')
return (a[i] - '0' < b[j] - '0');
i++, j++;
}
// Different sizes
return (a.size() < b.size());
}
int main(){
vector<string> array = {"1","2","3","4","5","6","7","8","9","10","11","12"};
sort(array.begin(), array.end(), compare);
for(auto value : array)
cout << value << " ";
return 0;
}
Input: 1 2 3 4 5 6 7 8 9 10 11 12
Output: 1 10 11 12 2 3 4 5 6 7 8 9
I want to convert an integer to a binary string and then store each bit of that string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the array specified. How can I do this in C++?
Pseudo code:
int value = ???? // assuming a 32 bit int
int i;
for (i = 0; i < 32; ++i) {
array[i] = (value >> i) & 1;
}
#include <climits>
#include <iostream>
#include <iterator>
template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
output_iterator first, output_iterator last)
{
const unsigned number_bits = CHAR_BIT*sizeof(int);
//extract bits one at a time
for(unsigned i=0; i<number_bits && first!=last; ++i) {
const unsigned shift_amount = number_bits-i-1;
const unsigned this_bit = (number>>shift_amount)&1;
*first = this_bit;
++first;
}
//pad the rest with zeros
while(first != last) {
*first = 0;
++first;
}
}
int main() {
int number = 413523152;
int array[32];
convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
for(int i=0; i<32; ++i)
std::cout << array[i] << ' ';
}
You could use C++'s bitset library, as follows.
#include<iostream>
#include<bitset>
using namespace std;
int main()
{
int N;//input number in base 10
cin>>N;
int O[32];//The output array
bitset<32> A=N;//A will hold the binary representation of N
for(int i=0,j=31;i<32;i++,j--)
{
//Assigning the bits one by one.
O[i]=A[j];
}
return 0;
}
A couple of points to note here:
First, 32 in the bitset declaration statement tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will have 32 bits, possibly with many leading zeroes.
Second, bitset is a really flexible way of handling binary: you can give it a string or a number as input, and you can use the bitset as an array or as a string. It's a really handy library.
You can print out the bitset variable A as
cout<<A;
and see how it works.
You can do it like this:
while (input != 0) {
if (input & 1)
result[index] = 1;
else
result[index] =0;
input >>= 1;// dividing by two
index++;
}
As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:
// Note: This depends on the endianess of your machine
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int)*CHAR_BIT];
for(int i = 0 ; i < sizeof(int)*CHAR_BIT ; ++i) {
arr[i] = (x & (0x01 << i)) ? 1 : 0; // Take the i-th bit
}
Decimal to Binary: size independent
Two ways: both store the binary representation in a dynamically allocated array bits (from MSB to LSB).
First Method:
#include<limits.h> // include for CHAR_BIT
#include<stdlib.h> // include for calloc
int* binary(int dec){
int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
// conversion
int left = sizeof(int) * CHAR_BIT - 1;
for(i = 0; left >= 0; left--, i++){
bits[i] = !!(dec & ( 1u << left ));
}
return bits;
}
Second Method:
#include<limits.h> // include for CHAR_BIT
#include<stdlib.h> // include for calloc
int* binary(unsigned int num)
{
unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
//mask = 1000 0000 0000 0000
int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
if(bits == NULL) return NULL;
int i = 0;
//conversion
while(mask > 0){
if((num & mask) == 0 )
bits[i] = 0;
else
bits[i] = 1;
mask = mask >> 1 ; // Right Shift
i++;
}
return bits;
}
I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)
int BinToDec(int Value, int Padding = 8)
{
int Bin = 0;
for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
{
Bin += ((Value >> I - 1) & 1) * Pos;
}
return Bin;
}
This is what I use; it also lets you give the number of bits that will be in the final vector and fills any unused bits with leading 0s.
std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
std::vector<int> r;
// make binary vec of minimum size backwards (LSB at .end() and MSB at .begin())
while (num_to_convert_to_binary > 0)
{
//cout << " top of loop" << endl;
if (num_to_convert_to_binary % 2 == 0)
r.push_back(0);
else
r.push_back(1);
num_to_convert_to_binary = num_to_convert_to_binary / 2;
}
while(r.size() < num_bits_in_out_vec)
r.push_back(0);
return r;
}