I keep receiving the error "vector subscript out of range" when I run the section of my code below. It is triggered on this line: interval.push_back(peaks_position_vector[n+1]-peaks_position_vector[n]);
I think the issue is with the vector size, as I'm going past the end of the vector. To test, I tried initializing n with 100 and decrementing it, and the error didn't occur. Can someone help me fix it according to my requirements? Thanks in advance. (Feel free to ask about other sections of the code; I only included this part because it is where the error occurs.)
double ECG::compute_heart_rate(vector<double>& peaks_position_vector)
{
    const int s = 60; // 'const' for good practice such that the variable s never changes in execution
    vector<double> interval;
    // compute pair-wise differences
    for (unsigned int n = 0; n < peaks_position_vector.size()-1; n++)
    {
        interval.push_back(peaks_position_vector[n+1] - peaks_position_vector[n]);
        // create heart rate vector
        vector<double> heart_rate;
        for (unsigned int j = 0; j < interval.size()-1; j++)
        {
            double hh = s/interval[j];
            heart_rate.push_back(hh);
            // calculate mean heart rate
            double mean_heart_rate, return_value = 0;
            int x = 0;
            for (unsigned int i = 0; i < heart_rate.size(); i++)
            {
                return_value += heart_rate[i];
                x++;
                mean_heart_rate = return_value / x;
            }
        }
    }
} // end of code block
I'm guessing that
n < peaks_position_vector.size()-1
should be
n + 1 < peaks_position_vector.size()
Suppose peaks_position_vector.size() equals zero. What do you think peaks_position_vector.size()-1 evaluates to, given that size() returns an unsigned value and zero is the smallest possible unsigned value?
The implementation you mention has a divide-by-zero bug. Where you are doing this, consider a vector input such as {4,4,4,4,5,5,5,5}:
interval.push_back (peaks_position_vector[n+1]-peaks_position_vector[n]);
....
....
double hh = s/interval[j];
heart_rate.push_back(hh)
...
...
return_value += heart_rate[i];
as you can see, that would mess up return_value in your case (it ends up with a value of inf, since the repeated peak positions produce zero-length intervals). And of course, what @john mentioned in the earlier answer also holds true.
I have to write code that does this for a university assignment. I have written the average function, but I'm getting an error message for the other function.
The DEVBOARD_readAccelerometer function will read x,y,z components of acceleration
Firstly, you will need to write a function:
int average(int *array, int nLen);
which returns the average value of an array of integer values. (To prevent overflow, it is suggested to use a long internal variable).
The Spirit Level function will run a loop. For each iteration the Z-component of gravitational acceleration will be sampled four times at 50ms intervals and stored in a suitably sized array.
Using the average() function you have written, determine the average value and analyze
int average(int* array, int nlen) { // assuming array is int
long sum = 0L; // sum will be larger than an item, long for safety.
for (int i = 0; i < nlen; i++) {
sum += array[i];
}
return ((long)sum) / nlen;
}
float sum[3];
int j;
for (int j = 0; j < 3; j++) {
i = DEVBOARD_readAccelerometer(int* xAccel, int* yAccel, int* zAccel);
sum[j] = int * zAccel;
}
average(float* sum[j], float nlen);
printf("The average zcomponent is %f\n");
}
unqualified-id before for
You declared the variable j twice: once before the for loop and once in the for definition itself.
I am trying to solve this problem:
Given a string array words, find the maximum value of length(word[i]) * length(word[j]) where the two words do not share common letters. You may assume that each word will contain only lower case letters. If no such two words exist, return 0.
https://leetcode.com/problems/maximum-product-of-word-lengths/
You can create a bitmap of char for each word to check if they share chars in common and then calc the max product.
I have two methods that are almost identical, but the first passes the checks while the second is too slow. Can you see why?
class Solution {
public:
int maxProduct2(vector<string>& words) {
int len = words.size();
int *num = new int[len];
// compute the bit O(n)
for (int i = 0; i < len; i ++) {
int k = 0;
for (int j = 0; j < words[i].length(); j ++) {
k = k | (1 <<(char)(words[i].at(j)));
}
num[i] = k;
}
int c = 0;
// O(n^2)
for (int i = 0; i < len - 1; i ++) {
for (int j = i + 1; j < len; j ++) {
if ((num[i] & num[j]) == 0) { // if no common letters
int x = words[i].length() * words[j].length();
if (x > c) {
c = x;
}
}
}
}
delete []num;
return c;
}
int maxProduct(vector<string>& words) {
vector<int> bitmap(words.size());
for(int i=0;i<words.size();++i) {
int k = 0;
for(int j=0;j<words[i].length();++j) {
k |= 1 << (char)(words[i][j]);
}
bitmap[i] = k;
}
int maxProd = 0;
for(int i=0;i<words.size()-1;++i) {
for(int j=i+1;j<words.size();++j) {
if ( !(bitmap[i] & bitmap[j])) {
int x = words[i].length() * words[j].length();
if ( x > maxProd )
maxProd = x;
}
}
}
return maxProd;
}
};
Why is the second function (maxProduct) too slow for LeetCode?
Solution
The second method makes repeated calls to words.size() inside the loop conditions. If you save that value in a variable, it works fine.
Since my comment turned out to be correct I'll turn my comment into an answer and try to explain what I think is happening.
I wrote some simple code to benchmark on my own machine with two solutions of two loops each. The only difference is the call to words.size() is inside the loop versus outside the loop. The first solution is approximately 13.87 seconds versus 16.65 seconds for the second solution. This isn't huge, but it's about 20% slower.
Even though vector.size() is a constant time operation that doesn't mean it's as fast as just checking against a variable that's already in a register. Constant time can still have large variances. When inside nested loops that adds up.
The other thing that could be happening (someone much smarter than me will probably chime in and let us know) is that you're hurting CPU optimizations like branch prediction and pipelining. Every time it gets to the end of the loop it has to stop, wait for the call to size() to return, and then check the loop variable against that return value. If the CPU can look ahead and guess that j is still going to be less than len because it hasn't seen len change (len isn't even modified inside the loop!) it can make a good branch prediction each time and not have to wait.
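To illustrate the fix, here is a hedged sketch of the second method with words.size() hoisted into a local (and, an assumption of mine, the usual c - 'a' offset so the shift amount stays below 32):

```cpp
#include <string>
#include <vector>

// Bitmask-of-letters approach with the container size hoisted into `len`,
// so the hot nested loops compare against a plain local variable.
int maxProductHoisted(const std::vector<std::string>& words) {
    const int len = (int)words.size();       // hoisted once, reused below
    std::vector<int> mask(len, 0);
    for (int i = 0; i < len; ++i)
        for (char c : words[i])
            mask[i] |= 1 << (c - 'a');       // one bit per lowercase letter
    int best = 0;
    for (int i = 0; i + 1 < len; ++i)
        for (int j = i + 1; j < len; ++j)
            if ((mask[i] & mask[j]) == 0) {  // no common letters
                int x = (int)(words[i].length() * words[j].length());
                if (x > best)
                    best = x;
            }
    return best;
}
```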
#include <cstdio>
#include <algorithm>
#include <cmath>
using namespace std;
int main() {
    int t, m, n;
    scanf("%d", &t);
    while (t--)
    {
        scanf("%d %d", &m, &n);
        int rootn = sqrt(double(n));
        bool p[10000]; // finding prime numbers from 1 to square_root(n)
        for (int j = 0; j <= rootn; j++)
            p[j] = true;
        p[0] = false;
        p[1] = false;
        int i = rootn;
        while (i--)
        {
            if (p[i] == true)
            {
                int c = i;
                do
                {
                    c = c + i;
                    p[c] = false;
                } while (c + p[i] <= rootn);
            }
        };
        i = 0;
        bool rangep[10000]; // used for finding prime numbers between m and n by eliminating multiples of primes between 1 and square_root(n)
        for (int j = 0; j <= n - m + 1; j++)
            rangep[j] = true;
        i = rootn;
        do
        {
            if (p[i] == true)
            {
                for (int j = m; j <= n; j++)
                {
                    if (j % i == 0 && j != i)
                        rangep[j - m] = false;
                }
            }
        } while (i--);
        i = n - m;
        do
        {
            if (rangep[i] == true)
                printf("%d\n", i + m);
        } while (i--);
        printf("\n");
    }
    return 0;
    system("PAUSE");
}
Hello, I'm trying to use the sieve of Eratosthenes to find the prime numbers in a range from m to n, where m >= 1 and n <= 100000000. When I give an input of 1 to 10000 the result is correct, but for a wider range the stack overflows even if I increase the array sizes.
A simple and more readable implementation
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

void Sieve(int n) {
    int sqrtn = (int)sqrt((double)n);
    std::vector<bool> sieve(n + 1, false); // sieve[k] == true means k is composite
    for (int m = 2; m <= sqrtn; ++m) {
        if (!sieve[m]) {
            cout << m << " ";
            for (int k = m * m; k <= n; k += m)
                sieve[k] = true;
        }
    }
    for (int m = sqrtn + 1; m <= n; ++m) // start past sqrtn so no prime is printed twice
        if (!sieve[m])
            cout << m << " ";
}
Reason for the error
You are declaring an enormous array as a local variable. That's why, when the stack frame of main is pushed, it needs so much memory that a stack overflow exception is generated. Visual Studio is clever enough to analyze the code for projected run-time stack usage and generate the exception when needed.
Use this compact implementation instead. Note that a bitset this large should stay at namespace scope (or be declared static); a local of this size would itself overflow the stack. Don't make implementations more complex than necessary.
Implementation
typedef long long ll;
typedef vector<int> vi;
ll _sieve_size;
vi primes;
bitset<100000000> bs;

void sieve(ll upperbound) {
    _sieve_size = upperbound + 1;
    bs.set();
    bs[0] = bs[1] = 0;
    for (ll i = 2; i <= _sieve_size; i++)
        if (bs[i]) { // if not marked, i is prime
            for (ll j = i * i; j <= _sieve_size; j += i) // mark all the multiples
                bs[j] = 0; // they are surely not prime :-)
            primes.push_back((int)i); // this is prime
        }
}
Call it from main() as sieve(10000);. You then have the list of primes in the vector primes.
Note: as mentioned in the comments, a stack overflow is quite an unexpected error here. You are implementing a sieve, and it will be more efficient if you use a bitset instead of a bool array.
A few more things: if n = 10^8 then sqrt(n) = 10^4, and your bool array is p[10000], whose valid indices are 0 through 9999. So there is a chance of accessing the array out of bounds.
I agree with the other answers,
saying that you should basically just start over.
Do you even care why your code doesn’t work? (You didn’t actually ask.)
I’m not sure that the problem in your code
has been identified accurately yet.
First of all, I’ll add this comment to help set the context:
// For any int aardvark;
// p[aardvark] = false means that aardvark is composite (i.e., not prime).
// p[aardvark] = true means that aardvark might be prime, or maybe we just don’t know yet.
Now let me draw your attention to this code:
int i=rootn;
while(i--)
{
if(p[i]==true)
{
int c=i;
do
{
c=c+i;
p[c]=false;
}while(c+p[i]<=rootn);
}
};
You say that n≤100000000 (although your code doesn’t check that), so,
presumably, rootn≤10000, which is the dimensionality (size) of p[].
The above code is saying that, for every integer i
(no matter whether it’s prime or composite),
2×i, 3×i, 4×i, etc., are, by definition, composite.
So, for c equal to 2×i, 3×i, 4×i, …,
we set p[c]=false because we know that c is composite.
But look closely at the code.
It sets c=c+i and says p[c]=false
before checking whether c is still in range
to be a valid index into p[].
Now, if n≤25000000, then rootn≤5000.
If i≤ rootn, then i≤5000, and, as long as c≤5000, then c+i≤10000.
But, if n>25000000, then rootn>5000,†
and the sequence i=rootn;, c=i;, c=c+i;
can set c to a value greater than 10000.
And then you use that value to index into p[].
That’s probably where the stack overflow occurs.
Oh, BTW; you don’t need to say if(p[i]==true); if(p[i]) is good enough.
To add insult to injury, there’s a second error in the same block:
while(c+p[i]<=rootn).
c and i are ints,
and p is an array of bools, so p[i] is a bool —
and yet you are adding c + p[i].
We know from the if that p[i] is true,
which is numerically equal to 1 —
so your loop termination condition is while (c+1<=rootn);
i.e., while c≤rootn-1.
I think you meant to say while(c+i<=rootn).
Oh, also, why do you have executable code
immediately after an unconditional return statement?
The system("PAUSE"); statement cannot possibly be reached.
(I’m not saying that those are the only errors;
they are just what jumped out at me.)
______________
† OK, splitting hairs, n has to be ≥ 25010001 (i.e., 5001²) before rootn>5000.
Hello, I'm having issues calculating the mean in my function. The program compiles, but I don't get the intended answer of 64.2; instead I get a random string of integers and characters.
This is not the entirety of the code but only the appropriate variables and functions.
// main function and prototyping would be here
int size=0;
float values[]={10.1, 9.2, 7.9, 9.2, 13.0, 12.7, 11.3};
float mean(float values[], int size)
{
float sum = 0;
float mean = 0;
for (size = 0; size > 7; size++)
{
sum += values[size];
mean = sum / 7;
}
return mean;
}
Change your loop like so:
for (size = 0; size < 7; size++)
{
sum += values[size];
}
mean = sum / 7;
Your terminating condition for for loop isn't right.
Move the mean out of for loop.
for (size = 0; size > 7; size++)
Since size is initialized to 0, the test size > 7 fails immediately (0 is not greater than 7), so the loop body never executes at all.
Secondly, you calculate mean inside the loop when you should calculate it after the loop is complete. Theoretically, you should get a correct value since you redo it as the mean of the sums to that point in the loop, but it is a waste of time. You also wipe out size by redefining it.
float mean(float values[], int size)
{
float sum = 0;
float mymean = 0;
for (int i = 0; i < size; i++)
{
sum += values[i];
}
mymean = sum / size;
return mymean;
}
Why is the test size > 7 there? Were you expecting the initial value of zero to be unusually large? You likely meant size < 7, though using arbitrary magic numbers like that invites trouble.
What you probably want is:
float mean(float* values, int size)
{
float sum = 0;
for (int i = 0; i < size; ++i)
sum += values[i];
return sum / size;
}
To be more C++ you'd want that signature to be:
float mean(const float* values, const size_t size)
That way you'd catch any mistakes with modifying those values.
I have a vector of numbers between 1 and 100 (this is not important) which can hold anywhere from 3 to 1,000,000 values.
Can anyone help me get all unique* 3-value combinations from that vector?
*Unique
Example: I have in the array the following values: 1[0] 5[1] 7[2] 8[3] 7[4] (the [x] is the index)
In this case 1[0] 5[1] 7[2] and 1[0] 5[1] 7[4] are different, but 1[0] 5[1] 7[2] and 7[2] 1[0] 5[1] are the same (duplicates)
My algorithm is a little slow when I work with a lot of values (e.g. 1,000,000), so what I want is a faster way to do it.
for (unsigned int x = 0; x < vect.size() - 2; x++) {
    for (unsigned int y = x + 1; y < vect.size() - 1; y++) {
        for (unsigned int z = y + 1; z < vect.size(); z++)
        {
            // do thing with vect[x], vect[y], vect[z]
        }
    }
}
In fact it is very very important that your values are between 1 and 100! Because with a vector of size 1,000,000 you have a lot of numbers that are equal and you don't need to inspect all of them! What you can do is the following:
Note: the following code is just an outline! It may lack sufficient error checking and is just here to give you the idea, not for copy paste!
Note2: When I wrote the answer, I assumed the numbers to be in the range [0, 99]. Then I read that they are actually in [1, 100]. Obviously this is not a problem and you can either -1 all the numbers or even better, change all the 100s to 101s.
bool exists[100] = {0}; // exists[i] means whether i exists in your vector
for (unsigned int i = 0, size = vect.size(); i < size; ++i)
exists[vect[i]] = true;
Then, you do similar to what you did before:
for(unsigned int x = 0; x < 98; x++)
if (exists[x])
for(unsigned int y = x+1; y < 99; y++)
if (exists[y])
for(unsigned int z = y+1; z < 100; z++)
if (exists[z])
{
// {x, y, z} is an answer
}
Another thing you can do is spend more time in preparation to have less time generating the pairs. For example:
int nums[100]; // from 0 to count are the numbers you have
int count = 0;
for (unsigned int i = 0, size = vect.size(); i < size; ++i)
{
bool exists = false;
for (int j = 0; j < count; ++j)
if (vect[i] == nums[j])
{
exists = true;
break;
}
if (!exists)
nums[count++] = vect[i];
}
Then
for(unsigned int x = 0; x < count-2; x++)
for(unsigned int y = x+1; y < count-1; y++)
for(unsigned int z = y+1; z < count; z++)
{
// {nums[x], nums[y], nums[z]} is an answer
}
Let us consider 100 to be a variable, so let's call it k, and the actual numbers present in the array as m (which is smaller than or equal to k).
With the first method, you have O(n) preparation and O(m^2*k) operations to search for the value which is quite fast.
In the second method, you have O(nm) preparation and O(m^3) for generation of the values. Given your values for n and m, the preparation takes too long.
You could actually merge the two methods to get the best of both worlds, so something like this:
int nums[100]; // from 0 to count are the numbers you have
int count = 0;
bool exists[100] = {0}; // exists[i] means whether i exists in your vector
for (unsigned int i = 0, size = vect.size(); i < size; ++i)
{
if (!exists[vect[i]])
nums[count++] = vect[i];
exists[vect[i]] = true;
}
Then:
for(unsigned int x = 0; x < count-2; x++)
for(unsigned int y = x+1; y < count-1; y++)
for(unsigned int z = y+1; z < count; z++)
{
// {nums[x], nums[y], nums[z]} is an answer
}
This method has O(n) preparation and O(m^3) cost to find the unique triplets.
Edit: It turned out that for the OP, the same number in different locations is considered a different value. If that is really the case then I'm sorry, there is no faster solution. The reason is that there are C(n, 3) possible combinations (a binomial coefficient), so even though you generate each one in O(1), the total count is still too big for you.
There's really nothing that can be done to speed up the loop body you have there. Consider that with 1M vector size, you are making one trillion loop iterations.
Producing all combinations like that is a combinatorial problem whose cost grows as the cube of the input size, which means you won't be able to practically solve it once the input becomes large enough. Your only option is to leverage specific knowledge of your application (what you need the results for, and how exactly they will be used) to "work around" the issue if possible.
Possibly you can sort your input, make it unique, and pick x[a], x[b] and x[c] when a < b < c. The sort will be O(n log n) and picking the combinations will be O(n³). Still you will have fewer triplets to iterate over:
std::vector<int> x = original_vector;
std::sort(x.begin(), x.end());
x.erase(std::unique(x.begin(), x.end()), x.end());
for (std::size_t a = 0; a + 2 < x.size(); ++a)
    for (std::size_t b = a + 1; b + 1 < x.size(); ++b)
        for (std::size_t c = b + 1; c < x.size(); ++c)
            issue_triplet(x[a], x[b], x[c]);
Depending on your actual data, you may be able to speed it up significantly by first making a vector that has at most three entries with each value and iterate over that instead.
As r15habh pointed out, I think the fact that the values in the array are between 1-100 is in fact important.
Here's what you can do: make one pass through the array, reading values into a unique set. This by itself is O(n) time complexity. The set will have no more than 100 elements, which means O(1) space complexity.
Now since you need to generate all 3-item combinations, you'll still need 3 nested loops, but instead of operating on the potentially huge array, you'll be operating on a set that has at most 100 elements.
Overall time complexity depends on your original data set. For a small data set, time complexity will be O(n^3). For a large data set, it will approach O(n).
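A sketch of that idea (names are mine): collapse the input to its distinct values with a set, then run the three nested loops over at most 100 elements. The counting body stands in for whatever you do with each triple:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Distinct-values pass, then combinations over the (<=100) unique numbers.
int countUniqueTriples(const std::vector<int>& vect) {
    std::set<int> uniq(vect.begin(), vect.end());
    std::vector<int> nums(uniq.begin(), uniq.end());
    int count = 0;
    for (std::size_t x = 0; x + 2 < nums.size(); ++x)
        for (std::size_t y = x + 1; y + 1 < nums.size(); ++y)
            for (std::size_t z = y + 1; z < nums.size(); ++z)
                ++count;   // {nums[x], nums[y], nums[z]} is one combination
    return count;
}
```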
If I understand your application correctly, you can use a tuple instead and store it in either a set or a hash table, depending on your requirements. If the orientation of the triple matters, shift the triple so that, say, the largest element comes first; if orientation shouldn't matter, just sort the tuple. A version using Boost and integers:
#include <iostream>
#include <set>
#include <algorithm>
#include "boost/tuple/tuple.hpp"
#include "boost/tuple/tuple_comparison.hpp"
int main()
{
typedef boost::tuple< int, int, int > Tri;
typedef std::set< Tri > TriSet;
TriSet storage;
// 1 duplicate
int exampleData[4][3] = { { 1, 2, 3 }, { 2, 3, 6 }, { 5, 3, 2 }, { 2, 1, 3 } };
for( unsigned int i = 0; i < sizeof( exampleData ) / sizeof( exampleData[0] ); ++i )
{
std::sort( exampleData[i], exampleData[i] + ( sizeof( exampleData[i] ) / sizeof( exampleData[i][0] ) ) );
if( !storage.insert( boost::make_tuple( exampleData[i][0], exampleData[i][1], exampleData[i][2] ) ).second )
std::cout << "Duplicate!" << std::endl;
else
std::cout << "Not duplicate!" << std::endl;
}
}