How to find the length and used elements of an array in structured text - structured-text

Array_Length_u8 := SIZEOF(SkippedElements_au8) / SIZEOF(SkippedElements_au8[0]);
I am using this formula to find the array length, but it doesn't give the expected result and always shows 256.

You want to use LOWER_BOUND and UPPER_BOUND.
For example:
ArraySize : DINT;
ThisIsAnArray : ARRAY[1..10] OF DINT;
ArraySize := ABS(UPPER_BOUND(ThisIsAnArray, 1) - LOWER_BOUND(ThisIsAnArray, 1)) + 1;

Related

How to flatten a 3D array, where the 3rd dimension is not of fixed size, into a 1D array?

Given a 2D array where each (x,y) cell contains a vector of strings (for simplicity) of varying size.
What's the most efficient way to flatten this data structure into a 1D array, i.e. creating a function that maps each string injectively to {1,...,n}, where n is the total number of strings in the data structure?
You can map an index i, j, k to linear position p in O(1) and back in O(log N), where N is the size of the 2D array, not the total number of strings.
First, let's treat your 2D array as a 1D, since that just makes things much easier. Index i is the index of a vector in the array. Index k is the position of a string in the vector. N is the size of the array.
You can create an array of integers (e.g. size_t) that holds the zero-based cumulative sum of all the vector lengths:
lengths = array[N]
lengths[0] = 0
for(i = 1 to N - 1)
    lengths[i] = lengths[i - 1] + size(array[i - 1])
If you want, you can compute the total number of strings as total = lengths[N - 1] + size(array[N - 1]).
Now, for a given string at index i, k, the position in the expanded array is just
p = lengths[i] + k
Given a position p, you map it to i, k using a bisection algorithm (binary search that returns the index of the left bound when an exact match isn't found):
i = bisect(lengths, p)
k = p - lengths[i]
Bisection is a simplified binary search, so O(log N).
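To make this concrete, here is a minimal C++ sketch of the mapping, assuming the data lives in a std::vector<std::vector<std::string>> (the container choice and all names are mine, not from the answer):
#include <algorithm>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>
// Zero-based cumulative sums of the vector lengths.
std::vector<std::size_t> build_lengths(const std::vector<std::vector<std::string>>& data) {
    std::vector<std::size_t> lengths(data.size());
    std::size_t running = 0;
    for (std::size_t i = 0; i < data.size(); ++i) {
        lengths[i] = running;
        running += data[i].size();
    }
    return lengths;
}
// (i, k) -> linear position p, O(1).
std::size_t to_linear(const std::vector<std::size_t>& lengths, std::size_t i, std::size_t k) {
    return lengths[i] + k;
}
// Linear position p -> (i, k), O(log N) via bisection.
std::pair<std::size_t, std::size_t> from_linear(const std::vector<std::size_t>& lengths, std::size_t p) {
    // First cumulative sum strictly greater than p; the vector just before it owns position p.
    auto it = std::upper_bound(lengths.begin(), lengths.end(), p);
    std::size_t i = static_cast<std::size_t>(it - lengths.begin()) - 1;
    return {i, p - lengths[i]};
}
Here std::upper_bound plays the role of the bisect step described above.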
All this works very nicely until you start expanding your vectors. At that point, insertion and deletion become O(N) operations, since you need to increment or decrement all the cumulative sums past the insertion point. To insert:
array[i].insert(k, a_string)
for(z = i + 1 to N - 1)
    lengths[z]++
And to delete:
array[i].erase(k)
for(z = i + 1 to N - 1)
    lengths[z]--
By the way, if you still want to use indices x, y for the array, you can convert between (x, y) and the linear index i of lengths using
i = x + C * y
x = i % C
y = i / C
Here, C is the number of columns in your array. You can easily generalize this to any number of dimensions.
Doesn't the simple, direct way work for you?
#include <vector>
#include <string>
int main() {
    std::vector<std::string> omg[3][4];
    std::vector<std::string> rv;
    for (auto const &row : omg) {
        for (auto const &cell : row) {
            for (auto const &str : cell) {
                rv.push_back(str);
            }
        }
    }
}

C++ : Fastest Way to find number of elements of one array in another array

I have two arrays: a sorted array int b[] and an unsorted array int a[n] with n elements. The sorted array is made of some or all elements of the unsorted array. Now there are M queries. For each query, values of l and r are given, and I need to find the number of elements of a between indices l and r which are present in b[].
For example:
N=5, M=2
a = [2 5 1 2 3]
b = [3 2 1]
For each query m:
l=1, r=5 -> a[1]=2, a[5]=3 -> the answer should be 3, as all elements of b, i.e. 1, 2, 3, are present in a[1..5].
l=2, r=4 -> a[2]=5, a[4]=2 -> the answer should be 2, as only 1 and 2 from b are present in a[2..4].
How can I find the answer with no more than O(M * log N) time complexity?
NOTE:
An array is not strictly necessary; a vector can also be used if it helps reduce the time complexity or makes the code easier to implement.
Well, I think you can do something like this:
std::map<int, int> c;
for (std::size_t i = 0; i < b.size(); ++i) {  // assuming b is a std::vector<int>
    c[b[i]] = 0;
}
for (int i = l; i <= r; ++i) {
    int number = a[i];
    if (c.count(number) > 0) {  // only count values that actually occur in b
        c[number]++;
    }
}
// Iterate through c (i.e. over the values of b) and count the entries whose value
// is different from 0. That last part is left to you.
The purpose of this is to create a map keyed by the values of b. Then, while iterating over a, you increase the corresponding count in c. After that, you can check which entries of c have a value different from zero, which means that the queried part of a contains that value of b.
You can use an array instead of a map for better performance if the values of b start from zero and increase by 1. If you use an array, make sure that a[i] cannot index out of bounds.
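As an illustration only, here is a small self-contained C++ sketch of that idea; the vector types, the 1-based query indices and the countPresent name are my own assumptions, not part of the original answer:
#include <iostream>
#include <unordered_set>
#include <vector>
// Count how many distinct values of b occur in a[l..r] (1-based, inclusive).
int countPresent(const std::vector<int>& a, const std::vector<int>& b, int l, int r) {
    std::unordered_set<int> lookup(b.begin(), b.end());
    std::unordered_set<int> found;
    for (int i = l; i <= r; ++i) {
        int value = a[i - 1];  // convert the 1-based query index to 0-based
        if (lookup.count(value) > 0) {
            found.insert(value);
        }
    }
    return static_cast<int>(found.size());
}
int main() {
    std::vector<int> a = {2, 5, 1, 2, 3};
    std::vector<int> b = {3, 2, 1};
    std::cout << countPresent(a, b, 1, 5) << '\n';  // 3
    std::cout << countPresent(a, b, 2, 4) << '\n';  // 2
    return 0;
}
Note that, like the answer above, this is linear in the length of the query range for each query, not the O(M * log N) the question asks for.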

how to find distinct substrings?

Given a string, and a fixed length l, how can I count the number of distinct substrings whose length is l?
The size of character set is also known. (denote it as s)
For example, given a string "PccjcjcZ", s = 4, l = 3,
then there are 5 distinct substrings:
“Pcc”; “ccj”; “cjc”; “jcj”; “jcZ”
I tried to use a hash table, but the speed is still slow.
In fact, I don't know how to make use of the character set size.
I have done something like this:
int diffPatterns(const string& src, int len, int setSize) {
    int cnt = 0;
    node* table[1 << 15];
    int tableSize = 1 << 15;
    for (int i = 0; i < tableSize; ++i) {
        table[i] = NULL;
    }
    unsigned int hashValue = 0;
    int end = (int)src.size() - len;
    for (int i = 0; i <= end; ++i) {
        hashValue = hashF(src, i, len);
        if (table[hashValue] == NULL) {
            table[hashValue] = new node(i);
            cnt++;
        } else {
            if (!compList(src, i, table[hashValue], len)) {
                cnt++;
            }
        }
    }
    for (int i = 0; i < tableSize; ++i) {
        deleteList(table[i]);
    }
    return cnt;
}
Hash tables are fine and practical, but keep in mind that if the length of the substrings is L and the whole string's length is N, then the algorithm is Theta((N+1-L)*L), which is Theta(NL) for most L. Remember, just computing the hash takes Theta(L) time. Plus there might be collisions.
Suffix trees can be used, and provide a guaranteed O(N) time algorithm (count the number of paths at depth L or greater), but the implementation is complicated. The saving grace is that you can probably find off-the-shelf implementations in the language of your choice.
The idea of using a hashtable is good. It should work well.
The idea of implementing your own hashtable as an array of length 2^15 is bad. See Hashtable in C++? instead.
You can use an unordered_set: insert the substrings into the set and then get the size of the set. Since the values in a set are unique, it will take care of not counting substrings that are the same as ones previously found. This should give you close to O(StringSize - SubstringSize) complexity.
#include <iostream>
#include <string>
#include <unordered_set>
int main()
{
    std::string test = "PccjcjcZ";
    std::unordered_set<std::string> counter;
    size_t substringSize = 3;
    for (size_t i = 0; i < test.size() - substringSize + 1; ++i)
    {
        counter.insert(test.substr(i, substringSize));
    }
    std::cout << counter.size();
    std::cin.get();
    return 0;
}
Veronica Kham gave a good answer to the question, but we can improve this method to expected O(n) and still use a simple hash table rather than a suffix tree or any other advanced data structure.
Hash function
Let X and Y be two adjacent substrings of length L; more precisely:
X = A[i, i + L - 1]
Y = A[i + 1, i + 1 + L - 1]
Let's assign to each letter of our alphabet a single non-negative integer, for example a := 1, b := 2 and so on.
Let's define a hash function h now:
h(A[i, j]) := (P^(L-1) * A[i] + P^(L-2) * A[i + 1] + ... + A[j]) % M
where P is a prime number, ideally greater than the alphabet size, and M is a very big number denoting the number of different possible hashes; for example, you can set M to the maximum unsigned long long int available on your system.
Algorithm
The crucial observation is the following:
If you have a hash computed for X, you can compute a hash for Y in
O(1) time.
Let's assume that we have computed h(X), which can obviously be done in O(L) time. We want to compute h(Y). Notice that X and Y differ by only 2 characters, so we can do that easily using addition and multiplication:
h(Y) = ((h(X) - P^(L-1) * A[i]) * P + A[j + 1]) % M
Basically, we are subtracting the letter A[i] multiplied by its coefficient in h(X), multiplying the result by P in order to get the proper coefficients for the rest of the letters, and at the end we are adding the last letter A[j + 1].
Notice that we can precompute powers of P at the beginning and we can do it modulo M.
Since our hashing function returns integers, we can use any hash table to store them. Remember to make all computations modulo M and to watch out for integer overflow.
Collisions
Of course, a collision might occur, but since P is prime and M is really huge, it is a rare situation.
If you want to lower the probability of a collision, you can use two different hashing functions, for example by using a different modulus in each of them. If the probability of a collision is p for one such function, then for two functions it is p^2, and we can make it arbitrarily small with this trick.
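As a minimal sketch of this rolling-hash counting (the base P, the implicit modulus of 2^64 via unsigned wrap-around, and all names are my own choices, not from the answer):
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_set>
// Count distinct substrings of length L with a rolling polynomial hash.
// Distinct hashes are counted, so a (rare) collision could undercount;
// a second hash function or a verification pass would remove that risk.
std::size_t countDistinct(const std::string& s, std::size_t L) {
    if (L == 0 || s.size() < L) return 0;
    const std::uint64_t P = 1000003;  // prime base, larger than the alphabet size
    // M is implicitly 2^64: unsigned arithmetic wraps around, which keeps the sketch short.
    std::uint64_t topPower = 1;  // P^(L-1), the coefficient of the outgoing character
    for (std::size_t i = 1; i < L; ++i) topPower *= P;
    std::uint64_t h = 0;  // hash of the first window
    for (std::size_t i = 0; i < L; ++i) h = h * P + static_cast<unsigned char>(s[i]);
    std::unordered_set<std::uint64_t> seen;
    seen.insert(h);
    for (std::size_t i = 0; i + L < s.size(); ++i) {  // slide: drop s[i], append s[i + L]
        h = (h - topPower * static_cast<unsigned char>(s[i])) * P
            + static_cast<unsigned char>(s[i + L]);
        seen.insert(h);
    }
    return seen.size();
}
int main() {
    std::cout << countDistinct("PccjcjcZ", 3) << '\n';  // expected 5
}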
Use Rolling hashes.
This will make the runtime expected O(n).
This might be repeating pkacprzak's answer, except that it gives the technique a name that is easier to remember.
A Suffix Automaton can also do it in O(N).
It's easy to code, but hard to understand.
Here are papers about it http://dl.acm.org/citation.cfm?doid=375360.375365
http://www.sciencedirect.com/science/article/pii/S0304397509002370

C++: How to pick out last quarter of elements in a vector?

What is the best way to pick out the last quarter of the elements in a vector containing N elements?
size_t n = src.size();
std::vector<int> dest(src.begin() + (3*n)/4, src.end());
Now dest contains the last quarter of the elements from the source vector src.
You can also use std::copy from the <algorithm> header as follows:
std::vector<int> dest_copy;
std::copy(src.begin() + (3*n)/4, src.end(), std::back_inserter(dest_copy));
See the online demo at ideone : http://ideone.com/qrVod
I think you may want to work more on the expression (3*n)/4. For example, when n is 5 you may want to pick only 1 element, but when n is 7 you may want to pick 2 instead of 1, so that decision is up to you. My solution just shows you how to copy the elements once you decide exactly how many; see the variant sketched below.
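For example, if you decide you want exactly n/4 elements rounded down (purely my own choice for illustration, not something the answer prescribes), you could compute the count explicitly:
std::size_t quarter = src.size() / 4;  // exactly floor(n/4) elements
std::vector<int> dest_floor(src.end() - quarter, src.end());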
Something like this, I guess:
size_t lastQuarter = myVector.size() * 3 / 4;
for (size_t i = lastQuarter; i < myVector.size(); i++)
{
    doSomething(myVector.at(i));
}

find middle elements from an array

In C++, how can I find the middle n elements of an array? For example, if n=3 and the array is [0,1,5,7,7,8,10,14,20], the middle is [7,7,8].
P.S. In my context, n and the number of elements in the array are both odd, so I can find the middle.
Thanks!
This is quick and not tested, but it shows the basic idea...
const int n = 5;
// myArray is the input array from the question.
// Get the middle index.
int arrLength = sizeof(myArray) / sizeof(int);
int middleIndex = (arrLength - 1) / 2;
// Number of elements needed on each side of the middle.
int side = (n - 1) / 2;
int count = 0;
int myNewArray[n];
for (int i = middleIndex - side; i <= middleIndex + side; i++) {
    myNewArray[count++] = myArray[i];
}
int values[] = {0,1,2,3,4,5,6,7,8};
const size_t total(sizeof(values) / sizeof(int));
const size_t needed(3);
std::vector<int> middle(needed);
std::copy(values + ((total - needed) / 2),
values + ((total + needed) / 2), middle.begin());
Have not checked this with all possible boundary conditions. With the sample data I get middle = (3,4,5), as desired.
Well, if you have to pick n numbers, you know there will be size - n unpicked items. As you want to pick numbers in the middle, you want to have as many 'unpicked' numbers on each side of the array, that is, (size - n) / 2.
I won't do your homework, but I hope this will help.
Well, the naive algorithm follows:
Find the middle, which exists because you specified that the length is odd.
Repeatedly pick off one element to the left and one element to the right. You can always do this because you specified that n is odd.
You can also make the following observation:
Note that after you've picked the middle, there are n - 1 elements remaining to pick off. This is an even number; (n - 1)/2 must come from the left of the middle element and (n - 1)/2 from the right. The middle element has index (length - 1)/2. Therefore, the lowest index selected is (length - 1)/2 - (n - 1)/2 and the highest index selected is (length - 1)/2 + (n - 1)/2. Consequently, the indices needed run from (length - n)/2 to (length + n)/2 - 1.
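A minimal sketch of that index arithmetic (the vector type, the middleN name and the odd-length assumption are mine, following the question's constraints):
#include <iostream>
#include <vector>
// Return the middle n elements of data, assuming data.size() and n are both odd
// (as the question states) and n <= data.size().
std::vector<int> middleN(const std::vector<int>& data, std::size_t n) {
    std::size_t first = (data.size() - n) / 2;  // (length - n)/2, the lowest index needed
    return std::vector<int>(data.begin() + first, data.begin() + first + n);
}
int main() {
    std::vector<int> data = {0, 1, 5, 7, 7, 8, 10, 14, 20};
    for (int v : middleN(data, 3)) std::cout << v << ' ';  // prints 7 7 8
    std::cout << '\n';
}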