C++ Matrix horizontal concat

I have two matrices, for example:
     a1 a2 a3 a4        a5 a6 a7 a8
M1 = b1 b2 b3 b4   M2 = b5 b6 b7 b8
     c1 c2 c3 c4        c5 c6 c7 c8
What I want is to get a concatenated matrix like this:
     a1 a2 a3 a4 a5 a6 a7 a8
Mr = b1 b2 b3 b4 b5 b6 b7 b8
     c1 c2 c3 c4 c5 c6 c7 c8
It has to be as fast as possible, because my whole program is built around this concatenation at a 50 MHz acquisition rate (sound acquisition).
It is actually needed to read a single row quickly (each row is a microphone stream).

If you store your matrix as a std::vector<std::vector<double>>, where each inner vector is one of your rows, you can use std::vector::insert to concatenate the rows of your matrices:
vector1.insert( vector1.end(), vector2.begin(), vector2.end() );
You might also find a library such as Armadillo useful. It has a function join_rows( A, B ), which does exactly what you ask for. With some luck that will perform better than what you can write yourself.
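Applied to the full matrices, a minimal sketch of the row-wise concatenation (assuming both matrices are stored as std::vector<std::vector<double>> with the same number of rows; hconcat is just a name chosen here for illustration):

#include <cstddef>
#include <vector>

// Horizontally concatenate two row-major matrices: each row of B is
// appended to the corresponding row of A. Assumes A and B have the
// same number of rows.
std::vector<std::vector<double>> hconcat(std::vector<std::vector<double>> A,
                                         const std::vector<std::vector<double>>& B)
{
    for (std::size_t r = 0; r < A.size(); ++r)
        A[r].insert(A[r].end(), B[r].begin(), B[r].end());
    return A;
}

If the 50 MHz requirement really is the bottleneck, a single flat row-major buffer per matrix will usually beat nested vectors, since each row then lives in one cache-friendly allocation and the concatenation becomes a pair of memcpy-like copies per row.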

Related

Pine Script conditions

I have 6 conditions:
c1 = ...
c2 = ...
Then, if any 4 of them are fulfilled (which yields 15 combinations), I want to execute some command. How can I do this?
E.g.:
cb1 = c1 and c2 and c3 and c4
cb2 = c1 and c2 and c3 and c5
cb3 = c1 and c2 and c3 and c6
cb4 = c1 and c2 and c4 and c5
cb5 = c1 and c2 and c4 and c6
cb6 = c1 and c2 and c5 and c6
cb7 = c1 and c3 and c4 and c5
cb8 = c1 and c3 and c4 and c6
cb9 = c1 and c3 and c5 and c6
cb10 = c1 and c4 and c5 and c6
cb11 = c2 and c3 and c4 and c5
cb12 = c2 and c3 and c4 and c6
cb13 = c2 and c3 and c5 and c6
cb14 = c2 and c4 and c5 and c6
cb15 = c3 and c4 and c5 and c6
// Set up alert
alertcondition(condition=cb1 or cb2 or cb3 or cb4 or cb5 or cb6 or cb7 or cb8 or cb9 or cb10 or cb11 or cb12 or cb13 or cb14 or cb15,
message="cb")
You can read this for some information.
if (condition1 == true) and (condition2 == true) and (condition3 == true) and (condition4 == true)
    // Do something
else if (condition2 == true) and (condition3 == true) and (condition4 == true) and (condition5 == true)
    // Do something else
Please note the indentation.
Could we use a point system instead? For example, start with x = 0 and add one point for each condition that is fulfilled:
condition1 ok: x = x + 1
condition2 ok: x = x + 1
condition3 ok: x = x + 1
condition4 ok: x = x + 1
...
then branch on the total, checking the largest threshold first:
if x >= 4
    do this
else if x >= 3
    do this
...
(see the sketch below)
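The counting idea avoids enumerating the 15 combinations explicitly. A minimal sketch, written here in C++ purely for illustration (the six condition values are hypothetical placeholders; in Pine Script the same pattern can be written with ternaries, e.g. (c1 ? 1 : 0) + (c2 ? 1 : 0) + ...):

#include <array>
#include <cstdio>

int main()
{
    // Hypothetical condition values; replace with the real c1..c6.
    std::array<bool, 6> c = {true, false, true, true, false, true};

    // Count how many of the six conditions hold.
    int score = 0;
    for (bool ci : c)
        score += ci ? 1 : 0;

    // "At least 4 of 6" replaces the 15 hand-written combinations.
    if (score >= 4)
        std::puts("alert: cb");

    return 0;
}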

Print and save a matrix in Fortran

Hello everyone, I am new to Fortran and I am facing a problem. Let's assume I have a matrix a(5,50):
a1 a2 a3 a4 a5 a6 a7 etc
b1 b2 b3 b4 b5 b6 b7 etc
c1 c2 c3 c4 c5 c6 c7 etc
d1 d2 d3 d4 d5 d6 d7 etc
e1 e2 e3 e4 e5 e6 e7 etc
Is there a way to save it to a file and print the matrix like the following:
a1 b1 c1 d1 e1
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
etc
without saving it to another matrix? I know I can always loop, copy it into a new matrix, and then write that to a file; I have also created a subroutine that prints my matrix in the correct order and makes it presentable, but I would like to avoid the extra copy.
Sure.
You could loop over the second index and write one column per record:
do ii = 1, 50
  write(unit, '(5(I7))') a(:, ii)
end do
Or you can write the whole array in a single statement. Fortran outputs array elements in column-major order, so with a format of 5 values per record each output line is one column of a, which is exactly the transposed layout you want:
write(unit, '(5(I7))') a
(Writing transpose(a) here would instead reproduce the original row order, because the transposed array is again traversed in column-major order.)
(I'm assuming that a is an integer array and that every value fits in 7 characters, sign included. Change the format if that's not the case.)
This computer doesn't have a Fortran compiler, so I haven't tested it, but it should work.
Cheers

Loop interval changing unexpectedly

I am writing a loop to remove every third element from an array until there is only one element left.
Here is the code:
int elimcnt = 1; // counts how many elements looped through
int cnt = 0;     // counts how many elements deleted, for printing purposes
for (int i = 0; v.size() > 1; i++, elimcnt++) {
    if (i == v.size()) { // reset i to the beginning when it hits the end
        i = 0;
    }
    if (elimcnt % in.M == 0 && elimcnt != 0) { // in.M is the elimination index, which is 3
        v.erase(v.begin() + (elimcnt % v.size()) - 1);
        cnt++;
        if (cnt % in.K == 0) { // in.K is how often to print, which is after every 7 deletes
            print_vector(v, cnt);
        }
    }
}
What actually happens when I run it is that it correctly deletes the first element, but after that it deletes every 4th element from there on out.
Here is an example input:
A1 A2 A3 A4 A5 A6 A7 A8 A9 B1 B2 B3
B4 B5 B6 B7 B8 B9 C1 C2 C3 C4 C5 C6
C7 C8 C9 D1 D2 D3 D4 D5 D6 D7 D8 D9
E1 E2 E3 E4 E5
This is what should be output:
A1 A2 A4 A5 A7 A8 B1 B2 B4 B5 B7 B8
C1 C2 C4 C5 C6 C7 C8 C9 D1 D2 D3 D4
D5 D6 D7 D8 D9 E1 E2 E3 E4 E5
This is what is actually output:
A1 A2 A4 A5 A6 A8 A9 B1 B3 B4 B5 B7
B8 B9 C2 C3 C4 C6 C7 C8 D1 D2 D3 D4
D5 D6 D7 D8 D9 E1 E2 E3 E4 E5
I can't seem to figure out what is causing this, so any help would be greatly appreciated.
The problem is in the expression used in the statement
v.erase(v.begin() + (elimcnt % v.size()) - 1);
Consider a sequence of numbers
1, 2, 3, 4, 5, 6
On the first pass over the sequence you need to delete 3 and 6. After deleting 3 you get
1, 2, 4, 5, 6
and elimcnt is then incremented to 4, but the size of the sequence is now 5. So when elimcnt reaches 6, the expression (elimcnt % v.size()) - 1 evaluates to 6 % 5 - 1 = 0, and element 1 is deleted instead of 6.
I would suggest a safer approach using iterators, for example:
size_t elimcnt = 0; // counts how many elements looped through
size_t cnt = 0;
for (auto it = v.begin(); v.size() > 1; it == v.end() ? it = v.begin() : it)
{
    if (++elimcnt % in.M == 0)
    {
        it = v.erase(it);
        if (++cnt % in.K == 0)
        {
            print_vector(v, cnt);
        }
    }
    else
    {
        ++it;
    }
}
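A minimal, self-contained harness for trying that loop (the names v, in.M, in.K and print_vector mirror the question; the Params struct and the sample data here are hypothetical stand-ins):

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Params { std::size_t M = 3; std::size_t K = 7; }; // hypothetical stand-in for "in"

void print_vector(const std::vector<std::string>& v, std::size_t cnt)
{
    std::cout << "after " << cnt << " deletions:";
    for (const auto& s : v)
        std::cout << ' ' << s;
    std::cout << '\n';
}

int main()
{
    Params in;
    std::vector<std::string> v = {"A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8", "A9"};

    std::size_t elimcnt = 0; // counts how many elements looped through
    std::size_t cnt = 0;     // counts how many elements deleted
    for (auto it = v.begin(); v.size() > 1; it == v.end() ? it = v.begin() : it)
    {
        if (++elimcnt % in.M == 0)
        {
            it = v.erase(it);
            if (++cnt % in.K == 0)
                print_vector(v, cnt);
        }
        else
        {
            ++it;
        }
    }
    print_vector(v, cnt); // the last remaining element
    return 0;
}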

Packing and de-interleaving two __m256 registers

I have a row-wise array of floats (~20 cols x ~1M rows) from which I need to extract two columns at a time into two __m256 registers.
...a0.........b0......
...a1.........b1......
// ...
...a7.........b7......
// end first __m256
A naive way to do this is
__m256i vindex = _mm256_setr_epi32(
    0,
    1 * stride,
    2 * stride,
    // ...
    7 * stride);
__m256 colA = _mm256_i32gather_ps(baseAddrColA, vindex, sizeof(float));
__m256 colB = _mm256_i32gather_ps(baseAddrColB, vindex, sizeof(float));
However, I was wondering if I would get better performance by retrieving a0, b0, a1, b1, a2, b2, a3, b3 in one gather, and a4, b4, ... a7, b7 in another because they're closer in memory, and then de-interleave them. That is:
// __m256 lo = a0 b0 a1 b1 a2 b2 a3 b3 // load proximal elements
// __m256 hi = a4 b4 a5 b5 a6 b6 a7 b7
// __m256 colA = a0 a1 a2 a3 a4 a5 a6 a7 // goal
// __m256 colB = b0 b1 b2 b3 b4 b5 b6 b7
I can't figure out how to nicely interleave lo and hi. I basically need the opposite of _mm256_unpacklo_ps. The best I've come up with is something like:
__m256i idxA = _mm256_setr_epi32(0, 2, 4, 6, 1, 3, 5, 7);
__m256i idxB = _mm256_setr_epi32(1, 3, 5, 7, 0, 2, 4, 6);
__m256 permLA = _mm256_permutevar8x32_ps(lo, idxA); // a0 a1 a2 a3 b0 b1 b2 b3
__m256 permHB = _mm256_permutevar8x32_ps(hi, idxB); // b4 b5 b6 b7 a4 a5 a6 a7
__m256 colA = _mm256_blend_ps(permLA, permHB, 0b11110000); // a0 a1 a2 a3 a4 a5 a6 a7
__m256 colB = _mm256_setr_m128(
_mm256_extractf128_ps(permLA, 1),
_mm256_castps256_ps128(permHB)); // b0 b1 b2 b3 b4 b5 b6 b7
That's 13 cycles. Is there a better way?
(For all I know, prefetch is already optimizing the naive approach as well as possible, but lacking that knowledge, I was hoping to benchmark the second approach. If anyone already knows what the result would be, please do share. With the above de-interleaving method, it's about 8% slower than the naive approach.)
Edit: Even without the de-interleaving, the "proximal" gather method is about 6% slower than the naive, constant-stride gather method. I take that to mean that this access pattern confuses hardware prefetch too much to be a worthwhile optimization.
// __m256 lo = a0 b0 a1 b1 a2 b2 a3 b3 // load proximal elements
// __m256 hi = a4 b4 a5 b5 a6 b6 a7 b7
// __m256 colA = a0 a1 a2 a3 a4 a5 a6 a7 // goal
// __m256 colB = b0 b1 b2 b3 b4 b5 b6 b7
It seems we can do this shuffle even faster than in my original answer:
void unpack_cols(__m256i lo, __m256i hi, __m256i& colA, __m256i& colB) {
    const __m256i mask = _mm256_setr_epi32(0, 2, 4, 6, 1, 3, 5, 7);
    // group cols crossing lanes:
    // a0 a1 a2 a3 b0 b1 b2 b3
    // a4 a5 a6 a7 b4 b5 b6 b7
    auto lo_grouped = _mm256_permutevar8x32_epi32(lo, mask);
    auto hi_grouped = _mm256_permutevar8x32_epi32(hi, mask);
    // swap lanes:
    // a0 a1 a2 a3 a4 a5 a6 a7
    // b0 b1 b2 b3 b4 b5 b6 b7
    colA = _mm256_permute2x128_si256(lo_grouped, hi_grouped, 0 | (2 << 4));
    colB = _mm256_permute2x128_si256(lo_grouped, hi_grouped, 1 | (3 << 4));
}
While both instructions have a 3-cycle latency on Haswell (see Agner Fog), they have single-cycle throughput. This means the whole shuffle has a throughput of 4 cycles and a latency of 8 cycles. If you have a spare register that can keep the mask, this should be better. Doing only two of these in parallel allows you to completely hide the latency. See godbolt and rextester.
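Since the question works with floats, here is a sketch of the same two-permute idea on __m256 data, using _mm256_permutevar8x32_ps and _mm256_permute2f128_ps (written by analogy with the integer version above and not benchmarked):

#include <immintrin.h>

// De-interleave two registers of a/b pairs into one register of a's and
// one of b's; float counterpart of the integer unpack_cols above.
static inline void unpack_cols_ps(__m256 lo, __m256 hi, __m256& colA, __m256& colB)
{
    const __m256i mask = _mm256_setr_epi32(0, 2, 4, 6, 1, 3, 5, 7);
    // lo = a0 b0 a1 b1 a2 b2 a3 b3  ->  a0 a1 a2 a3 b0 b1 b2 b3
    // hi = a4 b4 a5 b5 a6 b6 a7 b7  ->  a4 a5 a6 a7 b4 b5 b6 b7
    __m256 lo_grouped = _mm256_permutevar8x32_ps(lo, mask);
    __m256 hi_grouped = _mm256_permutevar8x32_ps(hi, mask);
    // Recombine the 128-bit halves across the two registers.
    colA = _mm256_permute2f128_ps(lo_grouped, hi_grouped, 0x20); // a0 .. a7
    colB = _mm256_permute2f128_ps(lo_grouped, hi_grouped, 0x31); // b0 .. b7
}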
Old answer, kept for reference:
The fastest way to do this shuffle is the following:
void unpack_cols(__m256i lo, __m256i hi, __m256i& colA, __m256i& colB) {
    // group cols within lanes:
    // a0 a1 b0 b1 a2 a3 b2 b3
    // a4 a5 b4 b5 a6 a7 b6 b7
    auto lo_shuffled = _mm256_shuffle_epi32(lo, _MM_SHUFFLE(3, 1, 2, 0));
    auto hi_shuffled = _mm256_shuffle_epi32(hi, _MM_SHUFFLE(3, 1, 2, 0));
    // unpack lo + hi at 64-bit granularity:
    // a0 a1 a4 a5 a2 a3 a6 a7
    // b0 b1 b4 b5 b2 b3 b6 b7
    auto colA_shuffled = _mm256_unpacklo_epi64(lo_shuffled, hi_shuffled);
    auto colB_shuffled = _mm256_unpackhi_epi64(lo_shuffled, hi_shuffled);
    // swap crossing lanes:
    // a0 a1 a2 a3 a4 a5 a6 a7
    // b0 b1 b2 b3 b4 b5 b6 b7
    colA = _mm256_permute4x64_epi64(colA_shuffled, _MM_SHUFFLE(3, 1, 2, 0));
    colB = _mm256_permute4x64_epi64(colB_shuffled, _MM_SHUFFLE(3, 1, 2, 0));
}
Starting with Haswell this has a throughput of 6 cycles (sadly, six instructions all on port 5). According to Agner Fog, _mm256_permute4x64_epi64 has a latency of 3 cycles. This means unpack_cols has a latency of 8 cycles.
You can check the code on godbolt.org or test it at rextester, which has AVX2 support but sadly no permalinks like godbolt.
Note that this is also very close to the problem I had where I gathered 64-bit ints and needed the high and low 32 bits separated.
Note that gather performance was really bad on Haswell, but according to Agner Fog, Skylake got a lot better at it (throughput down from ~12 cycles to ~5). Still, shuffling such simple patterns should be a lot faster than gathering.
To load columns of 32-bit floats you can combine the _mm256_setr_pd and _mm256_shuffle_ps intrinsics (it takes about 10 cycles):
#include <iostream>
#include <immintrin.h>

inline void Print(const __m256 & v)
{
    float b[8];
    _mm256_storeu_ps(b, v);
    for (int i = 0; i < 8; i++)
        std::cout << b[i] << " ";
    std::cout << std::endl;
}

int main()
{
    const size_t stride = 100;
    float m[stride * 8];
    for (size_t i = 0; i < stride * 8; ++i)
        m[i] = (float)i;
    const size_t stride2 = stride / 2;
    double * p = (double*)m;
    __m256 ab0145 = _mm256_castpd_ps(_mm256_setr_pd(p[0 * stride2], p[1 * stride2], p[4 * stride2], p[5 * stride2]));
    __m256 ab2367 = _mm256_castpd_ps(_mm256_setr_pd(p[2 * stride2], p[3 * stride2], p[6 * stride2], p[7 * stride2]));
    __m256 a = _mm256_shuffle_ps(ab0145, ab2367, 0x88);
    __m256 b = _mm256_shuffle_ps(ab0145, ab2367, 0xDD);
    Print(a);
    Print(b);
    return 0;
}
Output:
0 100 200 300 400 500 600 700
1 101 201 301 401 501 601 701
Concerning the performance of the _mm256_i32gather_ps intrinsic, I would recommend looking here.
I assume that a and b are placed in columns 0 and 10, then 1 and 11, and so on up to 9 and 19; if not, change vindexm[] as you want.
If you want to use a gather instruction:
#include <stdio.h>
#include <immintrin.h>

#define Distance 20 // number of columns
float a[32][20] __attribute__((aligned(32))) = {{1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.10,1.11,1.12,1.13,1.14,1.15,1.16},
{2.01,2.02,2.03,2.04,2.05,2.06,2.07,2.08,2.09,2.10,2.11,2.12,2.13,2.14,2.15,2.16},
{3.01,3.02,3.03,3.04,3.05,3.06,3.07,3.08,3.09,3.10,3.11,3.12,3.13,3.14,3.15,3.16},
{4.01,4.02,4.03,4.04,4.05,4.06,4.07,4.08,4.09,4.10,4.11,4.12,4.13,4.14,4.15,4.16},
{5.01,5.02,5.03,5.04,5.05,5.06,5.07,5.08,5.09,5.10,5.11,5.12,5.13,5.14,5.15,5.16},
{6.01,6.02,6.03,6.04,6.05,6.06,6.07,6.08,6.09,6.10,6.11,6.12,6.13,6.14,6.15,6.16},
{7.01,7.02,7.03,7.04,7.05,7.06,7.07,7.08,7.09,7.10,7.11,7.12,7.13,7.14,7.15,7.16},
{8.01,8.02,8.03,8.04,8.05,8.06,8.07,8.08,8.09,8.10,8.11,8.12,8.13,8.14,8.15,8.16},
{9.01,9.02,9.03,9.04,9.05,9.06,9.07,9.08,9.09,9.10,9.11,9.12,9.13,7.14,9.15,9.16},
{10.1,10.2,10.3,10.4,10.5,10.6,10.7,10.8,10.9,10.10,10.11,10.12,10.13,10.14,10.15,10.16},
{11.1,11.2,11.3,11.4,11.5,11.6,11.7,11.8,11.9,11.10,11.11,11.12,11.13,11.14,11.15,11.16},
{12.1,12.2,12.3,12.4,12.5,12.6,12.7,12.8,12.9,12.10,12.11,12.12,12.13,12.14,12.15,12.16},
{13.1,13.2,13.3,13.4,13.5,13.6,13.7,13.8,13.9,13.10,13.11,13.12,13.13,13.14,13.15,13.16},
{14.1,14.2,14.3,14.4,14.5,14.6,14.7,14.8,14.9,14.10,14.11,14.12,14.13,14.14,14.15,14.16},
{15.1,15.2,15.3,15.4,15.5,15.6,15.7,15.8,15.9,15.10,15.11,15.12,15.13,15.14,15.15,15.16},
{16.1,16.2,16.3,16.4,16.5,16.6,16.7,16.8,16.9,16.10,16.11,16.12,16.13,16.14,16.15,16.16}};
float tempps[8] __attribute__((aligned(32)));

void printVecps(__m256 vec)
{
    _mm256_store_ps(&tempps[0], vec);
    printf(", [0]=%3.2f, [1]=%3.2f, [2]=%3.2f, [3]=%3.2f, [4]=%3.2f, [5]=%3.2f, [6]=%3.2f, [7]=%3.2f \n",
           tempps[0], tempps[1], tempps[2], tempps[3], tempps[4], tempps[5], tempps[6], tempps[7]);
}
int main() {
    __m256 vec1;
    int vindexm[8] __attribute__((aligned(32))) = {0, Distance/2, Distance, Distance + Distance/2,
                                                   Distance*2, Distance*2 + Distance/2, Distance*3, Distance*3 + Distance/2};
    __m256i vindex = _mm256_load_si256((__m256i *) &vindexm[0]);

    // loops
    vec1 = _mm256_i32gather_ps(&a[0][0], vindex, 4); // place it in your loop as you want
    printVecps(vec1);
    return 0;
}
The output is:
[0]=1.01, [1]=1.11, [2]=2.01, [3]=2.11, [4]=3.01, [5]=3.11, [6]=4.01, [7]=4.11

Regular expression sequence matching

Is it possible to create a regular expression to find an incrementing sequence of hex numbers? I am trying to find number sequences (4 numbers long) inside seemingly random hex number strings.
... 59 fd 25 bf b1 b2 b3 b4 39 ca ...
... 35 c1 55 c4 c5 c6 c7 74 92 e1 ...
I was hoping to find the pattern b1 b2 b3 b4 in line 1 and c4 c5 c6 c7 in line 2.
Group matching will find repeated sequences... /(\w\w)\1{3}/ will find c4 c4 c4 c4, but I haven't found a way to match an incrementing sequence.
Any ideas?
Regular expressions are suited to matching patterns that repeat, not patterns whose values increment.
You are better off parsing it with your own parser.
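A minimal sketch of that hand-rolled approach, written in C++ (assuming the input is a whitespace-separated list of two-digit hex bytes, as in the examples above):

#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::string line = "35 c1 55 c4 c5 c6 c7 74 92 e1"; // second sample line from the question

    // Parse the whitespace-separated hex bytes into integers.
    std::vector<int> bytes;
    std::istringstream iss(line);
    std::string tok;
    while (iss >> tok)
        bytes.push_back(std::stoi(tok, nullptr, 16));

    // Slide a window of 4 and report runs where each value is the previous + 1.
    for (std::size_t i = 0; i + 3 < bytes.size(); ++i) {
        bool incrementing = true;
        for (std::size_t j = 1; j < 4; ++j)
            if (bytes[i + j] != bytes[i] + static_cast<int>(j))
                incrementing = false;
        if (incrementing) {
            std::cout << "incrementing run starting at token " << i << ":" << std::hex;
            for (std::size_t j = 0; j < 4; ++j)
                std::cout << ' ' << bytes[i + j];
            std::cout << std::dec << '\n';
        }
    }
    return 0;
}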