Suppose we have a vector V consisting of 20 floating-point numbers. Is it possible to insert values between each pair of these floating-point numbers such that vector V becomes a vector of exactly 50 numbers?
The inserted value should be a random number between the lower and upper values; I've decided to insert the midpoint of the two.
I have tried the following:
vector<double> upsample(vector<double>& in)
{
vector<double> temp;
for (int i = 1; i <= in.size() - 1 ; i++)
{
double sample = (in[i] + in[i - 1]) / 2;
temp.push_back(in[i - 1]);
temp.push_back(sample);
}
temp.push_back(in.back());
return temp;
}
With this function the output has 2n - 1 elements, so 20 elements become 39. The input vector can also have other sizes, all less than 50.
I think it can be done by inserting more than one value between some pairs of elements, chosen randomly, to reach a vector of exactly size 50 (e.g. between V[0] and V[1] insert 3 values, between V[3] and V[4] insert 1 value, etc.). Is this possible?
Could you please guide me on how to perform this?
Thank you.
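(A minimal sketch of this gap-distribution idea, using evenly spaced points where the asker wants random values; upsample_to and its details are my own illustration, not part of the original question.)
#include <cstddef>
#include <vector>

// Grow "in" to exactly n elements (n >= in.size() >= 2) by inserting values
// into the gaps between neighbours, spreading the insertions as evenly as
// possible; each inserted value is an evenly spaced point inside its gap
// (replace it with a random value between in[g] and in[g+1] if preferred).
std::vector<double> upsample_to(const std::vector<double>& in, std::size_t n)
{
    const std::size_t gaps = in.size() - 1;
    const std::size_t extra = n - in.size(); // total number of values to insert
    std::vector<double> out;
    out.reserve(n);
    for (std::size_t g = 0; g < gaps; ++g) {
        out.push_back(in[g]);
        // this gap receives extra/gaps values, plus one more for the first
        // (extra % gaps) gaps, so all the counts sum to "extra"
        const std::size_t k = extra / gaps + (g < extra % gaps ? 1 : 0);
        for (std::size_t j = 1; j <= k; ++j) {
            const double t = double(j) / double(k + 1); // position inside the gap
            out.push_back(in[g] + t * (in[g + 1] - in[g]));
        }
    }
    out.push_back(in.back());
    return out;
}
For 20 input elements and n = 50 this inserts two values into each of the first 11 gaps and one value into each of the remaining 8.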
So I did some math myself, because I was curious how to get the weight ratios (as if upsampling linearly up to a common multiple and then extracting only the target values from the large array, but without actually creating the large array: the weights tell you how much the left and right elements contribute to a particular value).
The sample code always creates the new value by a simple weighted average (e.g. 40% of 123.4 with 60% of 567.8 gives the "upscaled" value 390.04); no randomness is used to pepper the upscaled values (that part is left to the OP).
The ratios go like this:
if a vector of size M is being upscaled to size N (M <= N) (the "upscale" always preserves the first and last elements of the input vector; those are "fixed" in this proposal),
then every upscaled element can be viewed as lying somewhere between some original elements [i, i+1].
If we declare the "distance" between the source elements [i, i+1] to be d = N-1, then the position of an upscaled element between them can always be expressed as some j/d where j:[0,d] (when j equals d, it is precisely at the "i+1" element, which is the same as the case j=0 with source elements [i+1,i+2]).
And the step between two consecutive upscaled elements is then M-1.
So when the source vector has size 4 and the upscaled vector should have size 5, the ratios for the upscaled elements are [ [4/4,0/4], [1/4,3/4], [2/4,2/4], [3/4,1/4], [0/4,4/4] ] of elements (indices into the vector) [ [0,1], [0,1], [1,2], [2,3], [2,3] ].
(The "distance" between source elements is 5-1=4, hence the "/4" that normalizes the weights; the step between upscaled elements is 4-1=3, which is why the ratios shift by [-3,+3] in every step.)
I'm afraid my description is far from "obvious" (however it feels in my head after figuring it out), but if you put some of it into a spreadsheet and toy around with it, hopefully it will make some sense. Or you can step through the code in a debugger to get a better feel for how the mumbling above transforms into real code.
Code example 1: this one will "copy" a source element only if the weight falls fully onto it (i.e. in the example data only the first and last elements are "copied"; the rest of the upscaled elements are weighted averages of the original values).
#include <iostream>
#include <vector>
#include <cassert>
static double get_upscale_value(const size_t total_weight, const size_t right_weight, const double left, const double right) {
// do the simple weighted average for demonstration purposes
const size_t left_weight = total_weight - right_weight;
return (left * left_weight + right * right_weight) / total_weight;
}
std::vector<double> upsample_weighted(std::vector<double>& in, size_t n)
{
assert( 2 <= in.size() && in.size() <= n ); // this is really only upscaling (can't downscale)
// resulting vector variable
std::vector<double> upscaled;
upscaled.reserve(n);
// upscaling factors variables and constants
size_t index_left = 0; // first "left" item is the in[0] element
size_t weight_right = 0; // and "right" has zero weight (i.e. in[0] is copied)
const size_t in_weight = n - 1; // total weight of single "in" element
const size_t weight_add = in.size() - 1; // shift of weight between "upscaled" elements
while (upscaled.size() < n) { // add N upscaled items
if (0 == weight_right) {
// full weight of left -> just copy it (never tainted by "upscaling")
upscaled.push_back(in[index_left]);
} else {
// the weight is somewhere between "left" and "right" items of "in" vector
// i.e. weight = 1..(in_weight-1) ("in_weight" is full "right" value, never happens)
double upscaled_val = get_upscale_value(in_weight, weight_right, in[index_left], in[index_left+1]);
upscaled.push_back(upscaled_val);
}
weight_right += weight_add;
if (in_weight <= weight_right) {
// the weight shifted so much that "right" is new "left"
++index_left;
weight_right -= in_weight;
}
}
return upscaled;
}
int main(int argc, const char *argv[])
{
std::vector<double> in { 10, 20, 30 };
// std::vector<double> in { 20, 10, 40 };
std::vector<double> upscaled = upsample_weighted(in, 14);
std::cout << "upsample_weighted from " << in.size() << " to " << upscaled.size() << ": ";
for (const auto i : upscaled) {
std::cout << i << " ";
}
std::cout << std::endl;
return 0;
}
output:
upsample_weighted from 3 to 14: 10 11.5385 13.0769 14.6154 16.1538 17.6923 19.2308 20.7692 22.3077 23.8462 25.3846 26.9231 28.4615 30
Code example 2: this one will "copy" every source element and use the weighted average only to fill the gaps between them, so as much of the original data as possible is preserved (at the price of the result not being a linear upscale of the original data set, but "aliased" to the "grid" defined by the target size):
(The code is pretty much identical to the first example, except for one if line in the upscaler.)
#include <iostream>
#include <vector>
#include <cassert>
static double get_upscale_value(const size_t total_weight, const size_t right_weight, const double left, const double right) {
// do the simple weighted average for demonstration purposes
const size_t left_weight = total_weight - right_weight;
return (left * left_weight + right * right_weight) / total_weight;
}
// identical to "upsample_weighted", except all source values from "in" are copied into result
// and only extra added values (to make the target size) are generated by "get_upscale_value"
std::vector<double> upsample_copy_preferred(std::vector<double>& in, size_t n)
{
assert( 2 <= in.size() && in.size() <= n ); // this is really only upscaling (can't downscale)
// resulting vector variable
std::vector<double> upscaled;
upscaled.reserve(n);
// upscaling factors variables and constants
size_t index_left = 0; // first "left" item is the in[0] element
size_t weight_right = 0; // and "right" has zero weight (i.e. in[0] is copied)
const size_t in_weight = n - 1; // total weight of single "in" element
const size_t weight_add = in.size() - 1; // shift of weight between "upscaled" elements
while (upscaled.size() < n) { // add N upscaled items
/* ! */ if (weight_right < weight_add) { /* ! this line is modified */
// most of the weight on left -> copy it (don't taint it by upscaling)
upscaled.push_back(in[index_left]);
} else {
// the weight is somewhere between "left" and "right" items of "in" vector
// i.e. weight = 1..(in_weight-1) ("in_weight" is full "right" value, never happens)
double upscaled_val = get_upscale_value(in_weight, weight_right, in[index_left], in[index_left+1]);
upscaled.push_back(upscaled_val);
}
weight_right += weight_add;
if (in_weight <= weight_right) {
// the weight shifted so much that "right" is new "left"
++index_left;
weight_right -= in_weight;
}
}
return upscaled;
}
int main(int argc, const char *argv[])
{
std::vector<double> in { 10, 20, 30 };
// std::vector<double> in { 20, 10, 40 };
std::vector<double> upscaled = upsample_copy_preferred(in, 14);
std::cout << "upsample_copy_preferred from " << in.size() << " to " << upscaled.size() << ": ";
for (const auto i : upscaled) {
std::cout << i << " ";
}
std::cout << std::endl;
return 0;
}
output:
upsample_copy_preferred from 3 to 14: 10 11.5385 13.0769 14.6154 16.1538 17.6923 19.2308 20 22.3077 23.8462 25.3846 26.9231 28.4615 30
(Notice how "20.7692" from example 1 is here just "20", a copy of the original sample, even though at that point "30" would have some small weight under linear interpolation.)
How to divide the elements of an array into a minimum number of arrays such that the elements of each formed array do not differ by more than 1?
Let's say that we have an array: [4, 6, 8, 9, 10, 11, 14, 16, 17].
The array elements are sorted.
I want to divide the elements of the array into a minimum number of arrays such that the elements in each resulting array do not differ by more than 1.
In this case, the groupings would be: [4], [6], [8, 9, 10, 11], [14], [16, 17]. So there would be a total of 5 groups.
How can I write a program for the same? Or you can suggest algorithms as well.
I tried the naive approach:
Obtain the difference between consecutive elements of the array and, if the difference is less than or equal to 1, add those elements to a new vector. However, this method is very unoptimized and outright fails to produce results for large inputs.
Actual code implementation:
#include<cstdio>
#include<iostream>
#include<vector>
using namespace std;
int main() {
int num = 0, buff = 0, min_groups = 1; // min_groups should start from 1 to take into account the grouping of the starting array element(s)
cout << "Enter the number of elements in the array: " << endl;
cin >> num;
vector<int> ungrouped;
cout << "Please enter the elements of the array: " << endl;
for (int i = 0; i < num; i++)
{
cin >> buff;
ungrouped.push_back(buff);
}
for (int i = 1; i < ungrouped.size(); i++)
{
if ((ungrouped[i] - ungrouped[i - 1]) > 1)
{
min_groups++;
}
}
cout << "The elements of entered vector can be split into " << min_groups << " groups." << endl;
return 0;
}
Inspired by Faruk's answer, if the values are constrained to be distinct integers, there is a possibly sublinear method.
Indeed, if the difference between two values equals the difference between their indexes, they are guaranteed to belong to the same group and there is no need to look at the intermediate values.
You have to organize a recursive traversal of the array, in preorder. Before subdividing a subarray, you compare the difference of the indexes of the first and last elements to the difference of the values, and only subdivide in case of a mismatch. As you work in preorder, this allows you to emit pieces of the groups in consecutive order, as well as detect the gaps. Some care has to be taken to merge the pieces of the groups.
The worst case will remain linear, because the recursive traversal can degenerate to a linear traversal (but not worse than that). The best case can be better. In particular, if the array holds a single group, it will be found in time O(1). If I am right, for every group of length between 2^n and 2^(n+1), you will spare at least 2^(n-1) tests. (In fact, it should be possible to estimate an output-sensitive complexity, equal to the array length minus a fraction of the lengths of all groups, or similar.)
Alternatively, you can work in a non-recursive way, by means of exponential search: from the beginning of a group, you start with a unit step and double the step every time, until you detect a gap (difference in values too large); then you restart with a unit step. Here again, for large groups you will skip a significant number of elements. Anyway, the best case can only be O(Log(N)).
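(A sketch of the exponential-search variant, under the stated assumption of distinct sorted integers, so that inside one group the value difference equals the index difference; count_groups is my own illustration, not code from the answer.)
#include <cstddef>
#include <vector>

// Count the groups by leaping ahead in doubling steps: a leap of "step" from
// index i stays inside one group exactly when a[i + step] - a[i] == step
// (distinct sorted integers). Large groups are crossed in O(log(group size)).
std::size_t count_groups(const std::vector<int>& a)
{
    if (a.empty()) return 0;
    std::size_t groups = 1, i = 0, step = 1;
    while (i + 1 < a.size()) {
        if (i + step >= a.size()) {
            step = 1;                                    // don't leap past the end
        } else if (a[i + step] - a[i] == static_cast<int>(step)) {
            i += step;                                   // same group: leap further
            step *= 2;
        } else if (step > 1) {
            step = 1;                                    // overshot a gap: retry small steps
        } else {
            ++groups;                                    // a[i+1] starts a new group
            ++i;
        }
    }
    return groups;
}
For {4, 6, 8, 9, 10, 11, 14, 16, 17} this returns 5, matching the example.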
I would suggest encoding subsets into an offset array defined as follows:
Elements for set #i are defined for indices j such that offset[i] <= j < offset[i+1]
The number of subsets is offset.size() - 1
This only requires one memory allocation.
Here is a complete implementation:
#include <cassert>
#include <iostream>
#include <vector>
std::vector<std::size_t> split(const std::vector<int>& to_split, const int max_dist = 1)
{
const std::size_t to_split_size = to_split.size();
if (to_split_size == 0)
    return {0}; // empty input: the encoding of zero sets
std::vector<std::size_t> offset(to_split_size + 1);
offset[0] = 0;
size_t offset_idx = 1;
for (std::size_t i = 1; i < to_split_size; i++)
{
const int dist = to_split[i] - to_split[i - 1];
assert(dist >= 0); // we assumed sorted input
if (dist > max_dist)
{
offset[offset_idx] = i;
++offset_idx;
}
}
offset[offset_idx] = to_split_size;
offset.resize(offset_idx + 1);
return offset;
}
void print_partition(const std::vector<int>& to_split, const std::vector<std::size_t>& offset)
{
const std::size_t offset_size = offset.size();
std::cout << "\nwe found " << offset_size-1 << " sets";
for (std::size_t i = 0; i + 1 < offset_size; i++)
{
std::cout << "\n";
for (std::size_t j = offset[i]; j < offset[i + 1]; j++)
{
std::cout << to_split[j] << " ";
}
}
}
int main()
{
std::vector<int> to_split{4, 6, 8, 9, 10, 11, 14, 16, 17};
std::vector<std::size_t> offset = split(to_split);
print_partition(to_split, offset);
}
which prints:
we found 5 sets
4
6
8 9 10 11
14
16 17
Iterate through the array. Whenever the difference between 2 consecutive elements is greater than 1, add 1 to your answer variable.
int getPartitionNumber(int arr[], int n) {
    // n is the size of the array
    int result = 1;
    for (int i = 1; i < n; i++) {
        if (arr[i] - arr[i-1] > 1) result++;
    }
    return result;
}
And because it is always nice to see more ideas and select the one that suits you best, here is the straightforward 6-line solution. Yes, it is also O(n). But I am not sure whether the overhead of the other methods makes them any faster.
Please see:
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>
#include <iterator>
using Data = std::vector<int>;
using Partition = std::vector<Data>;
Data testData{ 4, 6, 8, 9, 10, 11, 14, 16, 17 };
int main(void)
{
// This is the resulting vector of vectors with the partitions
std::vector<std::vector<int>> partition{};
// Iterating over source values
for (Data::iterator i = testData.begin(); i != testData.end(); ++i) {
// Check,if we need to add a new partition
// Either, at the beginning or if diff > 1
// No underflow, because of boolean short-circuit evaluation
if ((i == testData.begin()) || ((*i) - (*(i-1)) > 1)) {
// Create a new partition
partition.emplace_back(Data());
}
// And, store the value in the current partition
partition.back().push_back(*i);
}
// Debug output: Copy all data to std::cout
std::for_each(partition.begin(), partition.end(), [](const Data& d) {std::copy(d.begin(), d.end(), std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n'; });
return 0;
}
Maybe this could be a solution . . .
How do you say your approach is not optimized? If yours is correct, then your approach takes O(n) time.
But you can use binary search here, which can do better in the average case. In the worst case, though, this binary search can take more than O(n) time.
Here's a tip:
As the array is sorted, you pick the furthest position whose difference from the current start is at most 1.
Binary search can do this in a simple way.
int arr[] = {4, 6, 8, 9, 10, 11, 14, 16, 17};
int st = 0, ed = n-1; // n = size of the array.
int partitions = 0;
while(st <= ed) {
int low = st, high = n-1;
int pos = low;
while(low <= high) {
int mid = (low + high)/2;
if((arr[mid] - arr[st]) <= 1) {
pos = mid;
low = mid + 1;
} else {
high = mid - 1;
}
}
partitions++;
st = pos + 1;
}
cout<< partitions <<endl;
In the average case it is better than O(n). But in the worst case (where the answer equals n) it takes O(n log n) time.
I need to identify the position of a variable in an integer array that has the following properties:
the sum of the elements before this variable is equal to the sum of the elements after this variable;
if the variable doesn't exist, I will show a message.
For example, if x = {1,2,4,2,1}, the result is 4 at position 2, because 1 + 2 == 2 + 1.
Any suggestions? In this example it's easy:
if((x[0]+x[1])==(x[3]+x[4]))
print position 2
But for n variables?
There are several ways to do this:
Brute force - n/2 passes:
Loop through the array.
For each element calculate the sum before and after that element.
If they match you found the element.
If the sum before becomes larger than the sum after, stop processing - no match found.
This is not really efficient for larger arrays.
1.5 passes:
Calculate the sum of all elements.
Divide that sum by 2 (half_sum).
Start summing the elements again from the beginning until you reach half_sum.
Check if you found a valid element or not (a sketch of this method follows the special cases below).
Single pass (positive numbers only):
Keep two running sums: one from the beginning (sum1) and one from the end (sum2).
Set sum1 = first element and sum2 = last element.
Check for the smallest of the two and add the next/previous element to that.
Loop until the positions meet and check if the element is a valid result.
For each method you'll have to do a little check first to see if the array is not too small.
Special cases to consider:
Empty array: return false
Array with 1 element: return element
Array with 2 nonzero elements: return false
What about all zeros, or groups of zeros in the middle? (see Deduplicator's comment)
Negative elements: single pass version will not work here (see Cris Luengo's comment)
Negative elements in general: not reliable, consider +3 +1 -1 +1 -1 +3 +1 (see Deduplicator's comment)
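(A minimal sketch of the "1.5 passes" variant for positive numbers, as referenced above; balanced_index is my own illustration, not code from the answer.)
#include <cstddef>
#include <numeric>
#include <optional>
#include <vector>

// Find an index i where the sum before i equals the sum after i:
// one pass to get the total, then at most one more pass comparing
// 2 * left == total - v[i], which is "left == right" rearranged.
std::optional<std::size_t> balanced_index(const std::vector<int>& v)
{
    const long long total = std::accumulate(v.begin(), v.end(), 0LL);
    long long left = 0; // sum of the elements before the current index
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (2 * left == total - v[i])
            return i;
        left += v[i];
    }
    return std::nullopt; // no such element
}
For {1, 2, 4, 2, 1} this returns index 2 (element 4).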
Here is an O(n) solution.
Keep summing into one variable from the array's beginning (left_sum), and keep subtracting from the sum of all elements except the first one (right_sum). When the two become equal, break the loop and print. Otherwise, show your message.
#include <iostream>
#include <vector>
#include <numeric>
#include <cstddef>
int main()
{
std::vector<int> vec {1,2,4,2,1};
int left_sum = 0;
int right_sum = std::accumulate(vec.cbegin()+1, vec.cend(), 0);
bool Okay = false;
std::size_t index = 1; // start from index 1 until n-1
for( ; index < vec.size() - 1; ++index)
{
left_sum += vec[index-1];
right_sum -= vec[index];
if(left_sum == right_sum)
{
Okay = true;
break;
}
// in the case of array of positive integers
// if(left_sum > right_sum) break;
}
(Okay) ? std::cout << vec[index] << " " << index << std::endl: std::cout << "No such case!\n";
return 0;
}
Thanks for the answers. I finally managed it. I used 3 for loops: s0 is the sum before the element, and s1 is the sum after the element.
for (i = 0; i < n; i++)
{
    s0 = 0;
    s1 = 0;
    for (int j = 0; j < i; j++)      // sum of the elements before position i
        s0 = s0 + v[j];
    for (int k = i + 1; k < n; k++)  // sum of the elements after position i
        s1 = s1 + v[k];
    if (s0 == s1)
    {
        cout << endl << "Position i=" << i;
        x++;
    }
}
if (x == 0)
    cout << "doesn't exist";
Well, do it in two steps:
Sum all elements.
From first to last:
If the sum equals the current element, success!
Subtract it twice from the sum (once for no longer being on the right, once for being on the left).
Use standard algorithms and range-for, and it's easily written:
#include <numeric>
#include <span>

auto first_balanced(std::span<const int> x) noexcept {
    auto balance = std::accumulate(x.begin(), x.end(), 0LL);
    for (auto&& n : x) {
        if (balance == n)
            return &n;                  // pointer to the balanced element
        balance -= 2 * n;
    }
    return x.data() + x.size();         // one-past-the-end: no such element
}
It's just looping. You need to sum the elements before and after each index and just compare these two sums:
#include <iostream>
#include <vector>
#include <numeric>
int main() {
std::vector<int> x = {1, 2, 4, 2, 1};
for ( unsigned idx = 0; idx < x.size(); ++idx )
if ( std::accumulate(x.begin(), x.begin() + idx, 0) == std::accumulate(x.begin() + idx + 1, x.end(), 0) )
std::cout << idx << std::endl;
return 0;
}
Trying to build a solution out of std::algorithm:
n + lg n operations instead of n + ~n/2.
Warning: untested code.
#include <algorithm>
#include <iterator>
#include <numeric>
#include <vector>

bool HasHalfSum(int& atIndex, const std::vector<int>& v) {
    std::vector<int> sum;
    sum.reserve(v.size());
    std::partial_sum(v.begin(), v.end(), std::back_inserter(sum));
    // e.g. 1,2,4,2,1 -> 1,3,7,9,10
    const int half = sum.back() / 2; // 5
    auto found = std::lower_bound(sum.begin(), sum.end(), half);
    const int before = (found == sum.begin()) ? 0 : *std::prev(found);
    if (before == sum.back() - *found) {
        atIndex = std::distance(sum.begin(), found);
        return true;
    }
    return false;
}
I have an array of length n. I want to sort the array elements such that my new array elements are like
arr[0] = arr[n/2]
arr[1] = arr[n/4]
arr[2] = arr[3n/4]
arr[3] = arr[n/8]
arr[4] = arr[3n/8]
arr[5] = arr[5n/8]
and so on...
What I have tried, using vectors.
#include <iostream>
#include <algorithm>
#include <vector>
bool myfunc (int l, int r)
{
int m = (l+r)/2;
return m;
}
int main()
{
std::vector<int> myvector = {3,1,20,9,7,5,6,22,17,14,4};
std::sort (myvector.begin(), myvector.end(), myfunc);
for (std::vector<int>::iterator it=myvector.begin(); it!=myvector.end(); ++it)
std::cout << ' ' << *it;
std::cout << '\n';
return 0;
}
So, for an array of length 11, I expect
myvector[0] = arr[5]
myvector[1] = arr[2]
myvector[2] = arr[8]
myvector[3] = arr[0]
myvector[4] = arr[3]
myvector[5] = arr[6]
myvector[6] = arr[9]
myvector[7] = arr[1]
myvector[8] = arr[4]
myvector[9] = arr[7]
myvector[10] = arr[10]
My question is, what should the definition of myfunc be, such that I get the expected output?
bool myfunc (int l, int r)
{
int m = (l+r)/2;
//Cant figure out this logic
}
I have tried the debugger, but that definitely doesn't help in defining the function! Any clues would be appreciated.
It appears you want a binary search tree (BST) stored in array form, using the same internal representation which is often used to store a heap.
The expected output is an array such that the one-based indexes form a tree, where for any one-based index x, the left child of x is at index 2*x, and the right child of x is at index 2*x+1. Additionally, there are no gaps, meaning every member of the array is used, up to N (it is a complete binary tree). Since C++ uses zero-based indexing, you need to be careful with this one-based index.
That way of representing a tree is very good for storing a heap data structure, but very bad for a binary search tree where you want to insert things, thus breaking the completeness, and forcing you into a very expensive rebalance.
You asked for a mapping from the sorted array index to this array format. We can build it using a recursive function. This recursive function will take exactly the same amount of work as it would have taken to build the binary tree, and in fact, it is nearly identical to how you would write that function, so this is not an optimal approach. We are doing as much work as the entire problem requires, just to come up with an intermediary step.
The special note here is that we do not want the median. We want to ensure that the left subtree forms a perfect binary tree, so that it fits in the array with no gaps. Therefore, it must have a power of 2, minus 1 nodes. The right subtree can be merely complete.
int log2(int n) {
if (n > 1)
return 1 + log2(n / 2);
return 0;
}
// current_position is the index in bst_indexes
void build_binary_tree_index_mapping(std::vector<int> &bst_indexes, int lower, int upper, int current_position=0) {
if (current_position >= bst_indexes.size())
return;
int power = log2(upper - lower);
int number = 1 << (power); // left subtree must be perfect
int root = lower + number - 1;
// fill current_position
// std::cout << current_position << " = " << root << std::endl;
bst_indexes[current_position] = root;
if (lower < root) {
// fill left subtree
int left_node_position = (current_position + 1) * 2 - 1;
build_binary_tree_index_mapping(bst_indexes, lower, root - 1, left_node_position);
}
if (root < upper) {
// fill right subtree
int right_node_position = (current_position + 1) * 2 + 1 - 1;
build_binary_tree_index_mapping(bst_indexes, root + 1, upper, right_node_position);
}
}
This gives me {7, 3, 9, 1, 5, 8, 10, 0, 2, 4, 6} as the index mapping. It differs from yours because you left spaces in the lower left of the tree, and I am ensuring that the array is completely filled, so I had to shift the bottom row over, then the BST property required reordering everything.
As a side note, in order to use this mapping, you first must sort the data, which is also about the same complexity as the whole problem.
Additionally, the sorted vector already gives you a superior way to do a binary search, using std::binary_search: http://en.cppreference.com/w/cpp/algorithm/binary_search
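(A tiny illustration of that point, using the question's data after sorting; this example is mine, not from the answer.)
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // binary search directly on the sorted vector; no tree layout needed
    std::vector<int> sorted{1, 3, 4, 5, 6, 7, 9, 14, 17, 20, 22};
    std::cout << std::boolalpha
              << std::binary_search(sorted.begin(), sorted.end(), 9) // true
              << '\n';
}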
The problem I am trying to solve is the following: I get N rectangular paper strips, each 1 cm wide and of length C. I need to cut the strips at a height where the sum of the areas of the cut-off parts equals A. You can see an example below for which N = 5, the strips are of lengths 5, 3, 6, 2 and 3 cm, and A = 3 cm², where the cut is made at 4 cm.
Note that I'm looking here for the red area.
The input is given as follows. The first line in each case begins with two integers N (1 ≤ N ≤ 10^5) and A (1 ≤ A ≤ 10^9) representing respectively the number of strips and the expected resulting area. The next line contains N integers, representing the length C_i (1 <= C_i <= 10^4) of each strip.
The input ends with N = A = 0, which should not be processed.
For each test case, output a single line, the height H of the cut that must be done so that the sum of the area of the cut strips is equal to A cm². Print the answer with 4 decimal places. Output ":D" if no cutting is required, or "-.-" if it’s impossible.
This problem can be found here
My idea for solving this problem was to use a binary search where I pick a height in the middle of the strips and move it higher or lower depending on whether my cut was too high or too low. My implementation is given below:
#include <iostream>
#include <vector>
#include <iomanip>
#include <algorithm>
using namespace std;
int main(){
vector<int> v; // Vector that holds paper heights
int n; // Number of papers
double h, // Height of the cut
sum, // Area sum
min_n, // Minimum height for cut to happen
max_n, // Maximum height for cut to happen
a; // Desired final area
// Set desired output
cout << fixed << setprecision(4);
/* Get number of papers and desired area,
terminates if N = A = 0
*/
while(cin >> n >> a && (n||a)){
v.resize(n); // Resize vector to fit all papers
// Get all paper sizes
for(int i=0;i<n;i++){
cin >> v[i];
}
/* Sort the vector in decreasing order to
simplify the search
*/
sort(v.begin(),v.end(),greater<int>());
max_n = v[0]; // Largest possible cut is at the height of the largest paper
min_n = 0; // Smallest possible cut is at the base with height 0
// Iterate until answer is found
while(true){
// Initialize cut height as the average of smallest and largest cut sizes
h = (min_n + max_n)/2;
/* The area sum is equal to the sum of the areas of each cut, which is
given by the height of the paper minus the cut height. If the cut is
higher than the paper, the cut has area 0.
*/
sum = 0;
// Using mascoj's suggestion, a few changes were added
int s = 0; // Temporary variable to hold the number of strips h is subtracted from
for (int i = 0; i < n; i++) {
    if (v[i] <= h) break; // From here onward cut area is 0 and there is no point adding
    sum += v[i];          // Removed the subtraction inside of the for loop
    s++;                  // Count how many paper strips were used
}
sum -= h * s; // Subtract the cut height once for each of the s paper strips
// If the error is smaller than the significant value, cut height is printed
if(std::abs(sum-a) < 1e-5){
// If no cut is needed print :D else print cut height
(h < 1e-4 ? cout << ":D" << endl : cout << h << endl);
break;
}
// If max_n is "equal" to min_n and no answer was found, there is no answer
else if(max_n - min_n < 1e-7){
cout << "-.-" << endl;
break;
}
// Reduces search interval
sum < a ? max_n = h : min_n = h;
}
}
return 0;
}
The problem is, after submitting my answer I keep getting a 10% error. The website has a tool for comparing your program's output with the expected output, so I ran a test file with over 1000 randomly generated test cases. When I compared the two outputs, I got a rounding error in the 4th decimal place; unfortunately, I no longer have that file or the script that generated the test cases. I tried changing the acceptable error to a smaller value, but that didn't work. I can't seem to find the error. Does anyone have an idea of what is happening?
P.S.: Although the problem description doesn't say so, the cuts can be at fractional heights.
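(For reference, a minimal sketch of the binary-search-on-height idea itself, with a fixed iteration count instead of comparing floating-point sums for equality; find_cut_height is my own illustration and assumes a non-empty strip list, not the accepted fix.)
#include <algorithm>
#include <vector>

// Binary search on the cut height h: the area above the cut, the sum of
// (len - h) over strips taller than h, decreases as h rises, so repeated
// halvings shrink the interval far below the 1e-4 output precision.
double find_cut_height(const std::vector<double>& strips, double a)
{
    double lo = 0.0;
    double hi = *std::max_element(strips.begin(), strips.end());
    for (int iter = 0; iter < 100; ++iter) {
        const double h = (lo + hi) / 2;
        double area = 0.0;
        for (const double len : strips)
            if (len > h)
                area += len - h;
        (area > a) ? lo = h : hi = h; // too much cut off: raise the cut
    }
    return (lo + hi) / 2;
}
For the example strips {5, 3, 6, 2, 3} and a = 3 this converges to 4.0.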
Might be your problem, maybe not: this line is exacerbating floating-point error: sum += v[i]-h;
Floating-point numbers are only so accurate, and compounding this error over a larger summation adds up. I would try multiplying h and subtracting that from the total sum of the applicable lengths. The values should be well within the range of the double-precision format, so I wouldn't worry about overrunning it.
Not sure I understand your algorithm, but I think this can be done a lot more simply using a map instead of a vector.
In the following example the map mp memorizes how many strips (the value) there are of a given length (the key).
An advantage of the map is that it is ordered.
Next you can see how much you have to save (not cut) and calculate the level of the cut starting from zero, adding 1 when appropriate and adding a fraction when necessary.
Hope the following example can help.
#include <map>
#include <iomanip>
#include <iostream>
int main()
{
int n;
int n2;
int v;
int cut;
int a;
std::map<int, std::size_t> mp;
long long int sum;
long long int ts;
std::cout << std::fixed << std::setprecision(4);
while( (std::cin >> n >> a) && ( n || a ) )
{
mp.clear();
n2 = 0;
sum = 0LL;
for ( auto i = 0 ; i < n ; ++i )
{
std::cin >> v;
if ( v > 0 )
{
sum += v;
++mp[v];
++n2;
}
}
// mp is a map, so the values are ordered
// ts is "to save"; sum of lengths minus a
ts = sum - a;
// cut level
cut = 0;
// while we can add a full cm to the cut level
while ( (ts > 0LL) && (n2 > 0) && (ts >= n2) )
{
++cut;
ts -= n2;
if ( cut >= mp.cbegin()->first )
{
n2 -= mp.cbegin()->second;
mp.erase(mp.cbegin());
}
}
if ( (ts == 0LL) && (cut == 0) )
std::cout << ":D" << std::endl; // no cut required (?)
else if ( n2 == 0 )
std::cout << "-.-" << std::endl; // impossible (?)
else
std::cout << (cut + double(ts) / n2) << std::endl;
}
}
p.s.: observe that a is defined as an integer in the page that you link.
I'm given a number, say N, and its corresponding positions in an array.
Say the positions (indices) given are:
4 5 8 11 13 15 21 28
I'm given two positions (indices) say x and y. Let x=7 and y=13.
I need to find how many occurrences of the number there are between positions x and y (both included, y >= x). In the above example the number exists at positions 8, 11 and 13, which lie between x and y, so the answer is 3.
A simple approach would be the naive O(n) algorithm, but I want to take advantage of the fact that the positions are always given in ascending order. I think applying binary search in a modified manner can help, but I'm facing trouble.
// P is the array that stores positions(indices) of number
int start=0,End=n-1; // n is the size of array P
int mid=(start+End)/2;
int pos1=0,pos2=0;
while(End>start)
{
mid=(start+End)/2;
if(P[mid]>=x && P[mid-1]<x && flag1!=0)
{
pos1=mid;
flag1=0
}
if(P[mid]<=y && P[mid+1]>y && flag2!=0)
{
pos2=mid;
flag2=0;
}
else if (P[mid]<x)
start=mid;
else
End=mid;
}
int Number_Of_Occurence=(pos2-pos1);
Can you please suggest where my code may go wrong?
You can take advantage of the STL. std::lower_bound or std::upper_bound come to mind.
Both have logarithmic complexity on sorted containers with random-access iterators.
For example:
#include <iostream>
#include <algorithm>
#include <vector>
int main() {
std::vector<int> v = {4, 5, 7, 8, 11, 13, 15, 21, 28};
int low_value = 7;
int high_value = 13;
auto low = std::lower_bound(v.begin(), v.end(), low_value);
auto high = std::upper_bound(v.begin(), v.end(), high_value);
std::cout << std::distance(low, high) << " elements in interval ["
<< low_value << ", " << high_value << "]" << std::endl;
return 0;
}
I'm boldly assuming this isn't a homework problem... You need to find the indices of both endpoints, but your code only has one "mid" variable. Assuming you reimplement the binary search for both endpoints correctly, and you are worried about the number of operations, you can reorder the conditions in the if statements so that they short-circuit on the flag check before evaluating the two other conditions. I.e.:
if( flag1 && P[mid1]>=x && P[mid1-1]<x ) {...}
is technically faster than
if( P[mid1]>=x && P[mid1-1]<x && flag1 ) {...}
Next, division can be an expensive operation... and you are dividing by 2. Use a bit shift instead:
jump_size = jump_size >> 1;
Now throwing away the flag entirely, we might rewrite the code to look more like this:
// n is the size of array P
// start in the middle
int pos1, pos2;
pos1 = pos2 = n >> 1;
// jump size is how far we jump up or down looking for our index
int jump_size = pos1 >> 1;
while (jump_size)
{
    if (P[pos1] > x) { pos1 -= jump_size; }
    else if (P[pos1] < x) { pos1 += jump_size; }
    // similar for y and pos2
    jump_size = jump_size >> 1;
}
You can use ceil(x) and floor(y) to find it in O(log N) time: the answer is floor(y) - ceil(x) + 1, where ceil(x) is the index of the smallest position >= x and floor(y) is the index of the largest position <= y.
Below is code for finding ceil(). (Note that this version scans linearly after the first check; for the O(log N) bound, a binary-search sketch follows further below.)
int ceilSearch(int arr[], int low, int high, int x)
{
int i;
/* If x is smaller than or equal to first element,
then return the first element */
if(x <= arr[low])
return low;
/* Otherwise, linearly search for ceil value */
for(i = low; i < high; i++)
{
if(arr[i] == x)
return i;
/* if x lies between arr[i] and arr[i+1] including
arr[i+1], then return arr[i+1] */
if(arr[i] < x && arr[i+1] >= x)
return i+1;
}
/* If we reach here then x is greater than the last element
of the array, return -1 in this case */
return -1;
}
You can easily modify it to make a floor() function.
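(If the O(log N) bound is wanted, here is a binary-search version sketched in the same style; ceilSearchFast is my own illustration, not part of the original answer.)
/* Binary-search version: returns the index of the smallest element >= x
   in arr[low..high], or -1 if no such element exists */
int ceilSearchFast(int arr[], int low, int high, int x)
{
    int result = -1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] >= x) {
            result = mid;   /* candidate; keep searching to the left */
            high = mid - 1;
        } else {
            low = mid + 1;
        }
    }
    return result;
}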
Another method is to use lower_bound() and upper_bound(), since you are using C++.