How to sort a 2d array in cpp by values of two columns?

I have a std::vector< std::vector<double> > array, the entries of which are
1 80 -0.15 -0.9 -0.15 0.6 0 -1.5
1 81 -0.15 -0.9 -0.15 0.7 0 -1.6
1 82 -0.15 -0.9 -0.15 0.8 0 -1.7
1 83 -0.15 -0.9 -0.15 0.9 0 -1.8
.
.
.
79 155 0.15 0.9 0.15 -0.9 0 1.8
79 156 0.15 0.9 0.15 -0.8 0 1.7
79 157 0.15 0.9 0.15 -0.7 0 1.6
79 158 0.15 0.9 0.15 -0.6 0 1.5
Each row has 8 elements. I want to sort the array by the 7th and 8th element using the std::sort function as
auto sortfunc = [](vector<double> va, vector<double> vb){ return (va[7] < vb[7] ) && (va[6]< vb[6] ); };
sort(array.begin(),array.end(), sortfunc );
The result is not a completely sorted array
3 153 -0.15 -0.7 0.1 -0.1 -0.25 -0.6
2 154 -0.15 -0.8 0.1 0 -0.25 -0.8
2 153 -0.15 -0.8 0.1 -0.1 -0.25 -0.7
2 152 -0.15 -0.8 0.1 -0.2 -0.25 -0.6
7 153 -0.1 -0.7 0.1 -0.1 -0.2 -0.6
7 154 -0.1 -0.7 0.1 0 -0.2 -0.7
.
.
.
74 94 0.1 0.8 -0.05 -0.5 0.15 1.3
74 95 0.1 0.8 -0.05 -0.4 0.15 1.2
74 96 0.1 0.8 -0.05 -0.3 0.15 1.1
74 97 0.1 0.8 -0.05 -0.2 0.15 1
77 100 0.15 0.7 -0.05 0.1 0.2 0.6
77 99 0.15 0.7 -0.05 0 0.2 0.7
This doesn't give me an array that is sorted by the given condition, as the elements in the 7th and 8th columns don't appear in any particular order.
What am I doing wrong here?
Github Gist for the arrays is here

Your sort criterion looks off. I think you need something more like this:
auto sortfunc = [](std::vector<double> const& va, std::vector<double> const& vb)
{
    if (va[7] == vb[7])
        return va[6] < vb[6];
    return va[7] < vb[7];
};
Sort by the first key (index 7) unless those values are equal, in which case sort according to the second key (index 6).

sortfunc does not meet the requirements of Compare, because the ordering it defines is not a strict weak ordering. It therefore causes undefined behaviour when used with std::sort.
If you want to compare multiple values, the easiest way is to use std::tuple, which automatically compares the first value and only compares the second value if the first ones match (note that the 7th and 8th elements live at indices 6 and 7):
auto sortfunc = [](std::vector<double> const& va, std::vector<double> const& vb){ return std::tie(va[6], va[7]) < std::tie(vb[6], vb[7]); };
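For reference, here is a minimal self-contained sketch of the std::tie approach (row values taken from the question; since each row has 8 elements, the 7th and 8th elements sit at indices 6 and 7):
#include <algorithm>
#include <iostream>
#include <tuple>
#include <vector>

int main()
{
    std::vector<std::vector<double>> array = {
        { 1,  80, -0.15, -0.9, -0.15,  0.6, 0, -1.5},
        { 1,  81, -0.15, -0.9, -0.15,  0.7, 0, -1.6},
        {79, 155,  0.15,  0.9,  0.15, -0.9, 0,  1.8},
    };

    // std::tie builds tuples of references; tuple's operator< compares
    // lexicographically, which is a valid strict weak ordering.
    auto sortfunc = [](std::vector<double> const& va, std::vector<double> const& vb)
    {
        return std::tie(va[6], va[7]) < std::tie(vb[6], vb[7]);
    };
    std::sort(array.begin(), array.end(), sortfunc);

    for (auto const& row : array) {
        for (double v : row) std::cout << v << ' ';
        std::cout << '\n';
    }
}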


Why does the c++ clock return smaller values after sleeping?

I am trying to measure the performance of parts of my code in order to compare different statistical methods. I noticed that the measured CPU time is significantly lower if I let the thread sleep for some time beforehand. What is going on there? Am I using clock() wrong?
I am on an Ubuntu system and using mpic++.
#include <ctime>
#include <chrono>
#include <cmath>
#include <random>
#include <iostream>
#include <thread>
int main(){
    // If I include this line then the measured time is 10 times smaller
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));

    std::default_random_engine generator;
    std::normal_distribution<double> distribution = std::normal_distribution<double>(0.0, 1.0);
    int M = 100000;
    double test = 0;
    clock_t start = clock();
    for (int counter = 0; counter < M; counter++) {
        test += distribution(generator);
    }
    clock_t end = clock();
    std::cout << "Generated " << M << " values in " << ((double)(end - start)) / CLOCKS_PER_SEC << std::endl;
    std::cout << test;
    return 0;
}
If I let the thread sleep then I get:
Generated 100000 values in 0.01637
Otherwise the result is:
Generated 100000 values in 0.134786
Strace results with std::this_thread::sleep_for(std::chrono::milliseconds(1000));:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.21 0.455606 455606 1 nanosleep
0.51 0.002321 18 130 read
0.06 0.000272 27 10 brk
0.04 0.000203 1 241 mmap
0.04 0.000176 2 101 11 openat
0.03 0.000117 1 178 mprotect
0.03 0.000115 1 90 close
0.02 0.000112 19 6 sched_getaffinity
0.02 0.000111 1 93 fstat
0.02 0.000072 8 9 clone
0.01 0.000053 27 2 prlimit64
0.01 0.000039 20 2 clock_gettime
0.01 0.000028 28 1 getpid
0.01 0.000028 1 24 1 futex
0.00 0.000000 0 2 write
0.00 0.000000 0 8 8 stat
0.00 0.000000 0 15 munmap
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 78 78 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 4 getdents
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
0.00 0.000000 0 1 getrandom
------ ----------- ----------- --------- --------- ----------------
100.00 0.459253 1003 98 total
Result without std::this_thread::sleep_for(std::chrono::milliseconds(1000));:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
32.00 0.002080 16 130 read
20.23 0.001315 5 241 mmap
14.00 0.000910 5 178 mprotect
10.15 0.000660 7 101 11 openat
5.41 0.000352 5 78 78 access
4.66 0.000303 3 93 fstat
4.43 0.000288 3 90 close
2.81 0.000183 20 9 clone
1.82 0.000118 10 12 1 futex
1.08 0.000070 5 15 munmap
0.80 0.000052 5 10 brk
0.62 0.000040 7 6 sched_getaffinity
0.57 0.000037 19 2 write
0.43 0.000028 4 8 8 stat
0.38 0.000025 13 2 clock_gettime
0.18 0.000012 3 4 getdents
0.15 0.000010 5 2 prlimit64
0.14 0.000009 9 1 getpid
0.03 0.000002 1 2 rt_sigaction
0.03 0.000002 2 1 arch_prctl
0.03 0.000002 2 1 getrandom
0.02 0.000001 1 1 rt_sigprocmask
0.02 0.000001 1 1 set_tid_address
0.02 0.000001 1 1 set_robust_list
0.00 0.000000 0 1 execve
------ ----------- ----------- --------- --------- ----------------
100.00 0.006501 990 98 total
I found the culprit. I am working in Eclipse and had a header with its corresponding cpp file still in the same project. TIL that Eclipse links files even if I do not include them. This file contains a class with a variable of type dealii::FullMatrix:
// Coefficients.h
#ifndef COEFFICIENTS_H_
#define COEFFICIENTS_H_
#include <deal.II/lac/full_matrix.h>

class Coefficients {
public:
    Coefficients(int dim);
protected:
    dealii::FullMatrix<double> values;
};

#endif /* COEFFICIENTS_H_ */
In the cpp file the constructor initializes the matrix:
// Coefficients.cpp
#include "Coefficients.h"

Coefficients::Coefficients(int dim) : values(dim, dim) {}
This somehow resulted in the time difference. For now I just put the constructor in my header file and that seems to solve the issue. I would be very interested if any of you know what's going on there.
Thank you for the discussion and all the interesting answers. A special thanks to n.m. for testing my program.
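For anyone chasing a similar discrepancy: clock() measures CPU time consumed by the process, while std::chrono::steady_clock measures elapsed wall-clock time, and recording both side by side makes it easier to see which timer is behaving unexpectedly. A minimal sketch of the question's benchmark with both timers (a diagnostic sketch, not the original code):
#include <chrono>
#include <ctime>
#include <iostream>
#include <random>

int main() {
    std::default_random_engine generator;
    std::normal_distribution<double> distribution(0.0, 1.0);

    const int M = 100000;
    double test = 0;

    std::clock_t cpu_start = std::clock();               // CPU time
    auto wall_start = std::chrono::steady_clock::now();  // wall-clock time

    for (int counter = 0; counter < M; ++counter)
        test += distribution(generator);

    std::clock_t cpu_end = std::clock();
    auto wall_end = std::chrono::steady_clock::now();

    std::cout << "CPU:  " << double(cpu_end - cpu_start) / CLOCKS_PER_SEC << " s\n"
              << "Wall: " << std::chrono::duration<double>(wall_end - wall_start).count() << " s\n"
              << test << '\n';  // print test so the loop is not optimized away
}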

Finding the Mode in a Vector of Floats

I am trying to find the mode average in a vector containing 324 float values.
The code I have is as follows:
float max = vec.back();
float prev = max;
float mode = 0.0;
int maxcount = 0;
int currcount = 0;
for (const auto n : vec) {
    if (n == prev) {
        ++currcount;
        if (currcount > maxcount) {
            maxcount = currcount;
            mode = n;
        }
    } else {
        currcount = 1;
    }
    prev = n;
}
std::cout << mode << std::endl;
This prints out the mode to be 0.75, which is wrong.
Here are all the float values, they come from a txt file so please excuse the format:
0.61 0.61 0.61 0.62 0.62 0.62 0.62 0.62 0.62 0.62 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.68 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.71 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.73 0.74 0.74 0.74 0.74 0.74 0.74 0.74 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.76 0.76 0.76 0.76 0.76 0.76 0.76 0.76 0.76 0.77 0.77 0.77 0.77 0.77 0.77 0.77 0.78 0.78 0.78 0.78 0.78 0.78 0.78 0.78 0.78 0.78 0.79 0.79 0.79 0.79 0.79 0.79 0.79 0.79 0.79 0.79 0.79
Excel presents the mode as 0.65. Why does my code not produce the same result? What do I need to change?
Many thanks.
edit: Through debugging I have found that the values within vec are more like 0.68000000000000005 and 0.69999999999999996, though some are still exactly two decimal places (0.64, 0.74, etc.). Could this be the issue? Am I able to round the values for this particular calculation?
The problem might be the use of floats for comparison. Because of how they are stored, floating point numbers differ, in general, from the value they are initialized to by a small amount.
Instead of using n == prev, consider a comparison within some small epsilon that is greater than the machine precision (for any machine you expect to run this code on) but less than the smallest true difference between any of your two numbers (which looks like 0.01). So you could do
if (((n - prev) < EPSILON) && ((prev - n) < EPSILON)) { ...
with float EPSILON = 0.000001f, or whatever value makes sense for you.
See also this question on comparing floats. Of note is that the ideal epsilon would change if your data set changed to much larger or much smaller numbers.
Even if there is another problem in your code, you might consider moving away from comparing floats in general.
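As a minimal runnable sketch of that epsilon comparison applied to the mode loop (the vector contents below are illustrative, not the asker's full 324-value data set):
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> vec = {0.61f, 0.61f, 0.65f, 0.65f, 0.65f, 0.75f};  // assumed sorted

    const float EPSILON = 0.000001f;  // below the smallest true gap (0.01 here)
    float prev = vec.front();
    float mode = prev;
    int maxcount = 0, currcount = 0;

    for (const auto n : vec) {
        if (std::fabs(n - prev) < EPSILON)  // "equal" within tolerance
            ++currcount;
        else
            currcount = 1;
        if (currcount > maxcount) { maxcount = currcount; mode = n; }
        prev = n;
    }
    std::cout << "mode = " << mode << '\n';  // prints 0.65
}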
By debugging I found that my values were not just two-decimal-place values; the mode was actually 0.7500000000004 but was still being printed as 0.75.
By adding a rounding function call, and removing the const, I was able to find the mode to two decimal places.
for (auto n : vec)
{
    n = roundf(n * 100) / 100;
    if (n == prev)
    {
        ++currcount;
        if (currcount > maxcount)
        {
            maxcount = currcount;
            mode = n;
        }
    } else
    {
        currcount = 1;
    }
    prev = n;
}

Skipping multiple numbers (not hard coded)

I am trying to use Regex for a project I am doing for work.
I have a set of numbers that looks like this:
23 14 62 -121 98 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 13 64 -118 101 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 10 65 -124 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 11 62 -130 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 2 65 -127 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 1 68 -127 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
29 -1 64 -129 92 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
22 2 63 -131 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 13 62 -130 91 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
15 6 66 -131 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 2 62 -137 80 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 -5 63 -133 74 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 -1 60 -135 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
15 11 59 -137 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
11 8 64 -131 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 10 64 -130 92 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 11 65 -136 96 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 8 59 -136 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 13 59 -135 90 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 10 60 -138 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
23 6 60 -133 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 10 57 -127 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
23 4 61 -127 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 -3 63 -131 75 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 -5 62 -129 73 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
25 -6 62 -127 80 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
21 2 60 -129 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 3 65 -133 81 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 8 64 -132 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 11 59 -131 89 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
28 5 59 -129 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
29 -3 56 -130 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 0 58 -128 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
28 12 65 -128 104 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
25 4 65 -123 94 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
17 -1 61 -126 77 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 2 62 -130 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
I want to get the 10th number in each row (165) with only one or two regex statements. The number occasionally changes from 165, so I am not able to hard-code it.
So far I have:
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,5})
Which is obviously not 1 or 2 statements but 10, and it also gives me 9 captures I don't want.
This problem has been fixed but now a new problem has arisen:
I thought that:
(?#<INS:5>)
^.{53}([+-]?\d+)
\.\.\.\. \.\.\.
(?#<INS:5>)
^.{53}([+-]?\d+)
\.\.\.\. \.\.\.
(?#<INS:5>)
^.{53}([+-]?\d+)
\.\.\.\. \.\.\.
fixed my problem but it turns out this code breaks in the following situation:
9486 9 68 -133 9562 -0.0 -0.1 0.0 -0.2 106 60.00 .... ...
9455 3 63 -129 9521 -0.0 -0.1 0.0 -0.2 106 60.00 .... ...
9417 3 64 -132 9485 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9367 3 60 -129 9431 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9305 12 56 -131 9373 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9237 12 66 -135 9315 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9170 2 65 -129 9238 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9111 4 62 -127 9177 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
9041 -0 58 -126 9099 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
8969 6 57 -129 9032 -0.0 -0.1 0.0 -0.2 89 60.00 .... ...
8887 9 60 -132 8956 -0.0 -0.1 0.0 -0.2 73 60.00 .... ...
8802 5 62 -131 8869 -0.0 -0.1 0.0 -0.2 73 60.00 .... ...
8720 1 64 -132 8785 -0.0 -0.1 0.0 -0.2 73 60.00 .... ...
8634 9 66 -137 8710 -0.0 -0.1 0.0 -0.2 73 60.00 .... ...
When the 10th number drops below 100 the regex fails. Is there any way to make this so it would not break for two-digit and one-digit numbers?
You can try adding [ ]* to the regex. This should grab 106 or 89. All it does is consume any extra spaces that appear before the number.
(?#<INS:5>)
^.{53}[ ]*([+-]?\d+)
\.\.\.\. \.\.\.
(?#<INS:5>)
^.{53}[ ]*([+-]?\d+)
\.\.\.\. \.\.\.
(?#<INS:5>)
^.{53}[ ]*([+-]?\d+)
\.\.\.\. \.\.\.
this worked for me, just using one string as an example.
var someString = "9486 9 68 -133 9562 -0.0 -0.1 0.0 -0.2 106 60.00 0909";
var getnum = someString.match(/[\-\+]?[0-9]*\.?[0-9]+/g);
getnum[9] // gives 106
note: using JavaScript
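For completeness, if the surrounding project happens to be C++ rather than JavaScript, the 10th field can be pulled out without any regex, since stream extraction skips runs of whitespace of any width. A small sketch (the sample line is taken from the question):
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line = "9486 9 68 -133 9562 -0.0 -0.1 0.0 -0.2 106 60.00 .... ...";
    std::istringstream iss(line);
    std::string token;
    // operator>> skips any amount of leading whitespace, so the varying
    // column widths that broke the fixed-offset regex are harmless here.
    for (int i = 0; i < 10 && (iss >> token); ++i) {}
    std::cout << token << '\n';  // prints 106
}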

Skipping multiple numbers

I am trying to use Regex for a project I am doing for work.
I have a set of numbers that looks like this:
23 14 62 -121 98 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 13 64 -118 101 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 10 65 -124 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 11 62 -130 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 2 65 -127 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 1 68 -127 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
29 -1 64 -129 92 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
22 2 63 -131 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 13 62 -130 91 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
15 6 66 -131 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 2 62 -137 80 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 -5 63 -133 74 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 -1 60 -135 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
15 11 59 -137 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
11 8 64 -131 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 10 64 -130 92 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 11 65 -136 96 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 8 59 -136 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 13 59 -135 90 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 10 60 -138 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
23 6 60 -133 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
20 10 57 -127 87 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
23 4 61 -127 88 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 -3 63 -131 75 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
16 -5 62 -129 73 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
25 -6 62 -127 80 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
21 2 60 -129 83 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 3 65 -133 81 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
14 8 64 -132 86 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
19 11 59 -131 89 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
28 5 59 -129 93 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
29 -3 56 -130 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
24 0 58 -128 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
28 12 65 -128 104 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
25 4 65 -123 94 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
17 -1 61 -126 77 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
18 2 62 -130 82 -0.0 -0.1 0.0 -0.2 165 60.00 .... ...
I want to get the 10th number in each row (165) with only one or two regex statements. The number occasionally changes from 165, so I am not able to hard-code it.
So far I have:
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,5})
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,}[.]{0,1}[0-9]{0,5})|([+-]{0,1}[.]{1,1}(?=[0-9])[0-9]{0,5}))
([+-]{0,1}[0-9]{1,5})
Which is obviously not 1 or 2 statements but 10, and it also gives me 9 captures I don't want.
Is there a way to get this in only 1 or 2 statements?
If all the columns in all the rows are a fixed width, you could just do this:
^.{55}(\d+)
Note this is designed to match numbers like "165" with no sign or fractional component. This might be more flexible:
^.{55}([+-]?\d+(?:\.\d+)?)
Edit: If you don't have to use regex, but can use Awk for example, you can make it much simpler:
cat file.dat | awk '{print $10}'
Here's my variation:
^(?:\S+\s+){9}(\S+)
Your number (i.e. 165) will be then stored in the first capture group (i.e. $1, depending on the language you're using).
Explained:
^       // Beginning of the line
(?:     // Do not capture this group (we're not interested in it)
  \S+   // Any non-space character - one or more times
  \s+   // Followed by a white-space character (one or more)
){9}    // Repeat the above nine times
(\S+)   // Any non-white-space characters (our tenth number)
Usage example in Perl:
cat file.dat | perl -ne 'print "$1\n" if /^(?:\S+\s+){9}(\S+)/'
^\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+(\S+)
Or, even better (thanks, #SamIAm!):
^(\S+\s+){9}(\S+)
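Note that in this variation the repeated group itself captures, so the tenth number ends up in the second capture group ($2) rather than $1.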
I've come up with this regex
/^.*?(?:[\d\.]+[^\d\.]+?){9}([\d\.-]+)/
http://rubular.com/r/0cwhaL92aw
You didn't specify what language/tools you're using, so here's a Java example:
// Split the input into each row:
final String rows[] = INPUT.split("\\n");
// Iterate through each row:
for (final String row : rows)
{
    // Split the row into components separated by spaces:
    final String components[] = row.split("\\s+");
    assert components.length >= 10 : "There is no 10th number!";
    final double number = Double.parseDouble(components[9]);
}

Extracting specific lines of data from a log file

I'm looking to extract and print a specific line from a table I have in a long log file. It looks something like this:
******************************************************************************
XSCALE (VERSION July 4, 2012) 4-Jun-2013
******************************************************************************
Author: Wolfgang Kabsch
Copy licensed until 30-Jun-2013 to
academic users for non-commercial applications
No redistribution.
******************************************************************************
CONTROL CARDS
******************************************************************************
MAXIMUM_NUMBER_OF_PROCESSORS=16
RESOLUTION_SHELLS= 20 10 6 4 3 2.5 2.0 1.9 1.8 1.7 1.6 1.5 1.4 1.3 1.2 1.1 1.0 0.9 0.8
MINIMUM_I/SIGMA=4.0
OUTPUT_FILE=fae-ip.ahkl
INPUT_FILE= /dls/sci-scratch/Sam/FC59251/fr6_1/XDS_ASCII.HKL
THE DATA COLLECTION STATISTICS REPORTED BELOW ASSUMES:
SPACE_GROUP_NUMBER= 97
UNIT_CELL_CONSTANTS= 128.28 128.28 181.47 90.000 90.000 90.000
***** 16 EQUIVALENT POSITIONS IN SPACE GROUP # 97 *****
If x',y',z' is an equivalent position to x,y,z, then
x'=x*ML(1)+y*ML( 2)+z*ML( 3)+ML( 4)/12.0
y'=x*ML(5)+y*ML( 6)+z*ML( 7)+ML( 8)/12.0
z'=x*ML(9)+y*ML(10)+z*ML(11)+ML(12)/12.0
# 1 2 3 4 5 6 7 8 9 10 11 12
1 1 0 0 0 0 1 0 0 0 0 1 0
2 -1 0 0 0 0 -1 0 0 0 0 1 0
3 -1 0 0 0 0 1 0 0 0 0 -1 0
4 1 0 0 0 0 -1 0 0 0 0 -1 0
5 0 1 0 0 1 0 0 0 0 0 -1 0
6 0 -1 0 0 -1 0 0 0 0 0 -1 0
7 0 -1 0 0 1 0 0 0 0 0 1 0
8 0 1 0 0 -1 0 0 0 0 0 1 0
9 1 0 0 6 0 1 0 6 0 0 1 6
10 -1 0 0 6 0 -1 0 6 0 0 1 6
11 -1 0 0 6 0 1 0 6 0 0 -1 6
12 1 0 0 6 0 -1 0 6 0 0 -1 6
13 0 1 0 6 1 0 0 6 0 0 -1 6
14 0 -1 0 6 -1 0 0 6 0 0 -1 6
15 0 -1 0 6 1 0 0 6 0 0 1 6
16 0 1 0 6 -1 0 0 6 0 0 1 6
ALL DATA SETS WILL BE SCALED TO /dls/sci-scratch/Sam/FC59251/fr6_1/XDS_ASCII.HKL
******************************************************************************
READING INPUT REFLECTION DATA FILES
******************************************************************************
DATA MEAN REFLECTIONS INPUT FILE NAME
SET# INTENSITY ACCEPTED REJECTED
1 0.1358E+03 1579957 0 /dls/sci-scratch/Sam/FC59251/fr6_1/XDS_ASCII.HKL
******************************************************************************
CORRECTION FACTORS AS FUNCTION OF IMAGE NUMBER & RESOLUTION
******************************************************************************
RECIPROCAL CORRECTION FACTORS FOR INPUT DATA SETS MERGED TO
OUTPUT FILE: fae-ip.ahkl
THE CALCULATIONS ASSUME FRIEDEL'S_LAW= TRUE
TOTAL NUMBER OF CORRECTION FACTORS DEFINED 720
DEGREES OF FREEDOM OF CHI^2 FIT 357222.9
CHI^2-VALUE OF FIT OF CORRECTION FACTORS 1.024
NUMBER OF CYCLES CARRIED OUT 4
CORRECTION FACTORS for visual inspection by XDS-Viewer DECAY_001.cbf
XMIN= 0.6 XMAX= 1799.3 NXBIN= 36
YMIN= 0.00049 YMAX= 0.44483 NYBIN= 20
NUMBER OF REFLECTIONS USED FOR DETERMINING CORRECTION FACTORS 396046
******************************************************************************
CORRECTION FACTORS AS FUNCTION OF X (fast) & Y(slow) IN THE DETECTOR PLANE
******************************************************************************
RECIPROCAL CORRECTION FACTORS FOR INPUT DATA SETS MERGED TO
OUTPUT FILE: fae-ip.ahkl
THE CALCULATIONS ASSUME FRIEDEL'S_LAW= TRUE
TOTAL NUMBER OF CORRECTION FACTORS DEFINED 7921
DEGREES OF FREEDOM OF CHI^2 FIT 356720.6
CHI^2-VALUE OF FIT OF CORRECTION FACTORS 1.023
NUMBER OF CYCLES CARRIED OUT 3
CORRECTION FACTORS for visual inspection by XDS-Viewer MODPIX_001.cbf
XMIN= 5.4 XMAX= 2457.6 NXBIN= 89
YMIN= 40.0 YMAX= 2516.7 NYBIN= 89
NUMBER OF REFLECTIONS USED FOR DETERMINING CORRECTION FACTORS 396046
******************************************************************************
CORRECTION FACTORS AS FUNCTION OF IMAGE NUMBER & DETECTOR SURFACE POSITION
******************************************************************************
RECIPROCAL CORRECTION FACTORS FOR INPUT DATA SETS MERGED TO
OUTPUT FILE: fae-ip.ahkl
THE CALCULATIONS ASSUME FRIEDEL'S_LAW= TRUE
TOTAL NUMBER OF CORRECTION FACTORS DEFINED 468
DEGREES OF FREEDOM OF CHI^2 FIT 357286.9
CHI^2-VALUE OF FIT OF CORRECTION FACTORS 1.022
NUMBER OF CYCLES CARRIED OUT 3
CORRECTION FACTORS for visual inspection by XDS-Viewer ABSORP_001.cbf
XMIN= 0.6 XMAX= 1799.3 NXBIN= 36
DETECTOR_SURFACE_POSITION= 1232 1278
DETECTOR_SURFACE_POSITION= 1648 1699
DETECTOR_SURFACE_POSITION= 815 1699
DETECTOR_SURFACE_POSITION= 815 858
DETECTOR_SURFACE_POSITION= 1648 858
DETECTOR_SURFACE_POSITION= 2174 1673
DETECTOR_SURFACE_POSITION= 1622 2230
DETECTOR_SURFACE_POSITION= 841 2230
DETECTOR_SURFACE_POSITION= 289 1673
DETECTOR_SURFACE_POSITION= 289 884
DETECTOR_SURFACE_POSITION= 841 326
DETECTOR_SURFACE_POSITION= 1622 326
DETECTOR_SURFACE_POSITION= 2174 884
NUMBER OF REFLECTIONS USED FOR DETERMINING CORRECTION FACTORS 396046
******************************************************************************
CORRECTION PARAMETERS FOR THE STANDARD ERROR OF REFLECTION INTENSITIES
******************************************************************************
The variance v0(I) of the intensity I obtained from counting statistics is
replaced by v(I)=a*(v0(I)+b*I^2). The model parameters a, b are chosen to
minimize the discrepancies between v(I) and the variance estimated from
sample statistics of symmetry related reflections. This model implicates
an asymptotic limit ISa=1/SQRT(a*b) for the highest I/Sigma(I) that the
experimental setup can produce (Diederichs (2010) Acta Cryst D66, 733-740).
Often the value of ISa is reduced from the initial value ISa0 due to systematic
errors showing up by comparison with other data sets in the scaling procedure.
(ISa=ISa0=-1 if v0 is unknown for a data set.)
a b ISa ISa0 INPUT DATA SET
1.086E+00 1.420E-03 25.46 29.00 /dls/sci-scratch/Sam/FC59251/fr6_1/XDS_ASCII.HKL
FACTOR TO PLACE ALL DATA SETS TO AN APPROXIMATE ABSOLUTE SCALE 0.4178E+04
(ASSUMING A PROTEIN WITH 50% SOLVENT)
******************************************************************************
STATISTICS OF SCALED OUTPUT DATA SET : fae-ip.ahkl
FILE TYPE: XDS_ASCII MERGE=FALSE FRIEDEL'S_LAW=TRUE
186 OUT OF 1579957 REFLECTIONS REJECTED
1579771 REFLECTIONS ON OUTPUT FILE
******************************************************************************
DEFINITIONS:
R-FACTOR
observed = (SUM(ABS(I(h,i)-I(h))))/(SUM(I(h,i)))
expected = expected R-FACTOR derived from Sigma(I)
COMPARED = number of reflections used for calculating R-FACTOR
I/SIGMA = mean of intensity/Sigma(I) of unique reflections
(after merging symmetry-related observations)
Sigma(I) = standard deviation of reflection intensity I
estimated from sample statistics
R-meas = redundancy independent R-factor (intensities)
Diederichs & Karplus (1997), Nature Struct. Biol. 4, 269-275.
CC(1/2) = percentage of correlation between intensities from
random half-datasets. Correlation significant at
the 0.1% level is marked by an asterisk.
Karplus & Diederichs (2012), Science 336, 1030-33
Anomal = percentage of correlation between random half-sets
Corr of anomalous intensity differences. Correlation
significant at the 0.1% level is marked.
SigAno = mean anomalous difference in units of its estimated
standard deviation (|F(+)-F(-)|/Sigma). F(+), F(-)
are structure factor estimates obtained from the
merged intensity observations in each parity class.
Nano = Number of unique reflections used to calculate
Anomal_Corr & SigAno. At least two observations
for each (+ and -) parity are required.
SUBSET OF INTENSITY DATA WITH SIGNAL/NOISE >= -3.0 AS FUNCTION OF RESOLUTION
RESOLUTION NUMBER OF REFLECTIONS COMPLETENESS R-FACTOR R-FACTOR COMPARED I/SIGMA R-meas CC(1/2) Anomal SigAno Nano
LIMIT OBSERVED UNIQUE POSSIBLE OF DATA observed expected Corr
20.00 557 66 74 89.2% 2.7% 3.0% 557 58.75 2.9% 100.0* 45 1.674 25
10.00 5018 417 417 100.0% 2.4% 3.1% 5018 75.34 2.6% 100.0* 2 0.812 276
6.00 18352 1583 1584 99.9% 2.8% 3.3% 18351 65.55 2.9% 100.0* 11* 0.914 1248
4.00 59691 4640 4640 100.0% 3.2% 3.5% 59690 64.96 3.4% 100.0* 4 0.857 3987
3.00 112106 8821 8822 100.0% 4.4% 4.4% 112102 50.31 4.6% 99.9* -3 0.844 7906
2.50 147954 11023 11023 100.0% 8.7% 8.6% 147954 29.91 9.1% 99.8* 0 0.829 10096
2.00 332952 24698 24698 100.0% 21.4% 21.6% 332949 14.32 22.3% 99.2* 1 0.804 22992
1.90 106645 8382 8384 100.0% 56.5% 57.1% 106645 5.63 58.8% 94.7* -2 0.767 7886
1.80 138516 10342 10343 100.0% 86.8% 87.0% 138516 3.64 90.2% 87.9* -2 0.762 9741
1.70 175117 12897 12899 100.0% 140.0% 140.1% 175116 2.15 145.4% 69.6* -2 0.732 12188
1.60 209398 16298 16304 100.0% 206.1% 208.5% 209397 1.35 214.6% 48.9* -2 0.693 15466
1.50 273432 20770 20893 99.4% 333.4% 342.1% 273340 0.80 346.9% 23.2* -1 0.644 19495
1.40 33 27 27248 0.1% 42.6% 112.7% 12 0.40 60.3% 88.2 0 0.000 0
1.30 0 0 36205 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
1.20 0 0 49238 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
1.10 0 0 68746 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
1.00 0 0 98884 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
0.90 0 0 147505 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
0.80 0 0 230396 0.0% -99.9% -99.9% 0 -99.00 -99.9% 0.0 0 0.000 0
total 1579771 119964 778303 15.4% 12.8% 13.1% 1579647 14.33 13.4% 99.9* -1 0.755 111306
========== STATISTICS OF INPUT DATA SET ==========
R-FACTORS FOR INTENSITIES OF DATA SET /dls/sci-scratch/Sam/FC59251/fr6_1/XDS_ASCII.HKL
RESOLUTION R-FACTOR R-FACTOR COMPARED
LIMIT observed expected
20.00 2.7% 3.0% 557
10.00 2.4% 3.1% 5018
6.00 2.8% 3.3% 18351
4.00 3.2% 3.5% 59690
3.00 4.4% 4.4% 112102
2.50 8.7% 8.6% 147954
2.00 21.4% 21.6% 332949
1.90 56.5% 57.1% 106645
1.80 86.8% 87.0% 138516
1.70 140.0% 140.1% 175116
1.60 206.1% 208.5% 209397
1.50 333.4% 342.1% 273340
1.40 42.6% 112.7% 12
1.30 -99.9% -99.9% 0
1.20 -99.9% -99.9% 0
1.10 -99.9% -99.9% 0
1.00 -99.9% -99.9% 0
0.90 -99.9% -99.9% 0
0.80 -99.9% -99.9% 0
total 12.8% 13.1% 1579647
******************************************************************************
WILSON STATISTICS OF SCALED DATA SET: fae-ip.ahkl
******************************************************************************
Data is divided into resolution shells and a straight line
A - 2*B*SS is fitted to log<I>, where
RES = mean resolution (Angstrom) in shell
SS = mean of (sin(THETA)/LAMBDA)**2 in shell
<I> = mean reflection intensity in shell
BO = (A - log<I>)/(2*SS)
# = number of reflections in resolution shell
WILSON LINE (using all data) : A= 14.997 B= 29.252 CORRELATION= 0.99
# RES SS <I> log(<I>) BO
1667 8.445 0.004 2.3084E+06 14.652 49.2
2798 5.260 0.009 1.5365E+06 14.245 41.6
3547 4.106 0.015 2.0110E+06 14.514 16.3
4147 3.480 0.021 1.2910E+06 14.071 22.4
4688 3.073 0.026 7.3586E+05 13.509 28.1
5154 2.781 0.032 4.6124E+05 13.042 30.3
5568 2.560 0.038 3.1507E+05 12.661 30.6
5966 2.384 0.044 2.4858E+05 12.424 29.2
6324 2.240 0.050 1.8968E+05 12.153 28.5
6707 2.119 0.056 1.3930E+05 11.844 28.3
7030 2.016 0.062 9.1378E+04 11.423 29.0
7331 1.926 0.067 5.4413E+04 10.904 30.4
7664 1.848 0.073 3.5484E+04 10.477 30.9
7934 1.778 0.079 2.4332E+04 10.100 31.0
8193 1.716 0.085 1.8373E+04 9.819 30.5
8466 1.660 0.091 1.4992E+04 9.615 29.7
8743 1.609 0.097 1.1894E+04 9.384 29.1
9037 1.562 0.102 9.4284E+03 9.151 28.5
9001 1.520 0.108 8.3217E+03 9.027 27.6
HIGHER ORDER MOMENTS OF WILSON DISTRIBUTION OF CENTRIC DATA
AS COMPARED WITH THEORETICAL VALUES. (EXPECTED: 1.00)
# RES <I**2>/ <I**3>/ <I**4>/
3<I>**2 15<I>**3 105<I>**4
440 8.445 0.740 0.505 0.294
442 5.260 0.762 0.733 0.735
442 4.106 0.888 0.788 0.717
439 3.480 1.339 1.733 2.278
438 3.073 1.168 1.259 1.400
440 2.781 1.215 1.681 2.269
438 2.560 1.192 1.603 2.405
450 2.384 1.117 1.031 0.891
432 2.240 1.214 1.567 2.173
438 2.119 0.972 0.992 0.933
445 2.016 1.029 1.019 0.986
441 1.926 1.603 1.701 1.554
440 1.848 1.544 1.871 2.076
436 1.778 0.927 0.661 0.435
444 1.716 1.134 1.115 1.197
440 1.660 1.271 1.618 2.890
436 1.609 1.424 1.045 0.941
448 1.562 1.794 1.447 1.423
426 1.520 2.517 1.496 2.099
8355 overall 1.253 1.255 1.455
HIGHER ORDER MOMENTS OF WILSON DISTRIBUTION OF ACENTRIC DATA
AS COMPARED WITH THEORETICAL VALUES. (EXPECTED: 1.00)
# RES <I**2>/ <I**3>/ <I**4>/
2<I>**2 6<I>**3 24<I>**4
1227 8.445 1.322 1.803 2.340
2356 5.260 1.167 1.420 1.789
3105 4.106 1.010 1.046 1.100
3708 3.480 1.055 1.262 1.592
4250 3.073 0.999 1.083 1.375
4714 2.781 1.061 1.232 1.591
5130 2.560 1.049 1.178 1.440
5516 2.384 1.025 1.117 1.290
5892 2.240 1.001 1.058 1.230
6269 2.119 1.060 1.140 1.233
6585 2.016 1.109 1.344 1.709
6890 1.926 1.028 1.100 1.222
7224 1.848 1.060 1.150 1.348
7498 1.778 1.143 1.309 1.655
7749 1.716 1.182 1.299 1.549
8026 1.660 1.286 1.376 1.538
8307 1.609 1.419 1.481 1.707
8589 1.562 1.663 1.750 2.119
8575 1.520 2.271 2.172 5.088
111610 overall 1.253 1.354 1.804
======= CUMULATIVE INTENSITY DISTRIBUTION =======
DEFINITIONS:
<I> = mean reflection intensity
Na(Z)exp = expected number of acentric reflections with I <= Z*<I>
Na(Z)obs = observed number of acentric reflections with I <= Z*<I>
Nc(Z)exp = expected number of centric reflections with I <= Z*<I>
Nc(Z)obs = observed number of centric reflections with I <= Z*<I>
Nc(Z)obs/Nc(Z)exp versus resolution and Z (0.1-1.0)
# RES 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
440 8.445 0.75 0.95 0.98 1.00 0.98 0.99 1.00 1.00 1.02 1.02
442 5.260 1.18 1.11 1.09 1.09 1.07 1.08 1.08 1.08 1.07 1.06
442 4.106 0.97 1.01 0.98 0.97 0.96 0.94 0.92 0.91 0.92 0.94
439 3.480 0.91 0.88 0.91 0.91 0.89 0.90 0.90 0.89 0.89 0.93
438 3.073 0.92 0.92 0.90 0.93 0.94 0.99 1.02 0.99 0.96 0.96
440 2.781 0.98 1.01 1.02 1.05 1.04 1.03 1.04 1.02 1.01 1.01
438 2.560 1.02 1.10 1.05 1.03 1.01 1.03 1.04 1.01 1.04 1.02
450 2.384 0.78 0.93 0.92 0.93 0.89 0.89 0.92 0.95 0.96 0.95
432 2.240 0.69 0.82 0.84 0.86 0.91 0.92 0.93 0.94 0.95 0.95
438 2.119 0.75 0.87 0.95 1.02 1.09 1.09 1.12 1.12 1.10 1.08
445 2.016 0.86 0.86 0.87 0.90 0.91 0.93 0.98 0.99 1.00 1.00
441 1.926 0.88 0.79 0.79 0.81 0.82 0.84 0.85 0.85 0.86 0.86
440 1.848 1.00 0.89 0.85 0.83 0.85 0.85 0.88 0.90 0.90 0.92
436 1.778 1.03 0.87 0.79 0.79 0.80 0.84 0.85 0.87 0.90 0.92
444 1.716 1.09 0.85 0.81 0.78 0.80 0.80 0.81 0.81 0.84 0.85
440 1.660 1.27 1.01 0.93 0.88 0.85 0.84 0.84 0.85 0.88 0.91
436 1.609 1.34 1.00 0.89 0.83 0.80 0.80 0.80 0.81 0.80 0.83
448 1.562 1.39 1.09 0.93 0.86 0.81 0.78 0.77 0.79 0.78 0.78
426 1.520 1.38 1.03 0.88 0.83 0.82 0.80 0.78 0.76 0.75 0.74
8355 overall 1.01 0.95 0.92 0.91 0.91 0.91 0.92 0.92 0.93 0.93
Na(Z)obs/Na(Z)exp versus resolution and Z (0.1-1.0)
# RES 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
1227 8.445 1.10 1.22 1.21 1.21 1.14 1.10 1.12 1.10 1.11 1.09
2356 5.260 1.15 1.10 1.09 1.03 1.03 1.03 1.01 1.01 1.01 1.00
3105 4.106 0.91 0.96 0.99 1.01 1.02 1.00 1.00 0.99 0.99 1.00
3708 3.480 0.93 0.97 1.00 1.06 1.05 1.04 1.04 1.04 1.04 1.05
4250 3.073 0.94 1.02 1.01 1.00 1.01 1.00 1.00 1.01 1.02 1.02
4714 2.781 1.11 1.04 1.02 1.02 1.02 1.01 1.01 1.01 1.00 1.00
5130 2.560 1.00 1.10 1.06 1.03 1.01 1.02 1.01 1.01 1.01 1.02
5516 2.384 1.09 1.08 1.05 1.04 1.04 1.02 1.01 1.01 1.01 1.01
5892 2.240 0.98 0.99 1.00 1.01 1.01 1.01 1.00 1.00 1.00 1.00
6269 2.119 1.14 1.04 1.02 1.00 1.00 1.00 1.01 1.02 1.02 1.01
6585 2.016 1.17 1.02 1.01 1.02 1.02 1.03 1.02 1.02 1.02 1.02
6890 1.926 1.35 1.07 1.00 0.99 1.00 1.01 1.01 1.00 1.00 1.01
7224 1.848 1.52 1.11 1.01 0.97 0.96 0.98 0.98 0.98 0.98 0.99
7498 1.778 1.80 1.22 1.03 0.97 0.95 0.94 0.95 0.95 0.95 0.96
7749 1.716 2.01 1.28 1.07 0.99 0.94 0.92 0.92 0.92 0.93 0.93
8026 1.660 2.31 1.41 1.13 1.01 0.95 0.92 0.90 0.89 0.89 0.89
8307 1.609 2.62 1.54 1.19 1.04 0.95 0.90 0.88 0.87 0.86 0.87
8589 1.562 2.94 1.69 1.29 1.10 1.00 0.93 0.89 0.86 0.85 0.85
8575 1.520 3.14 1.78 1.34 1.13 1.01 0.93 0.88 0.85 0.83 0.83
111610 overall 1.73 1.24 1.09 1.03 0.99 0.97 0.96 0.96 0.96 0.96
List of 33 reflections *NOT* obeying Wilson distribution (Z> 10.0)
h k l RES Z Intensity Sigma
72 11 61 1.52 17.34 0.2886E+06 0.2367E+05 "alien"
67 53 6 1.50 15.85 0.2638E+06 0.1128E+06 "alien"
35 10 25 3.17 14.39 0.2118E+08 0.2364E+06 "alien"
46 17 99 1.50 14.16 0.2357E+06 0.9588E+05 "alien"
34 32 2 2.75 13.44 0.1239E+08 0.1279E+06 "alien"
79 6 15 1.60 13.10 0.3117E+06 0.2477E+05 "alien"
61 20 33 1.88 12.54 0.8900E+06 0.3054E+05 "alien"
44 4 48 2.30 12.38 0.4695E+07 0.6072E+05 "alien"
66 25 19 1.79 11.89 0.5788E+06 0.2739E+05 "alien"
66 25 11 1.81 11.88 0.5781E+06 0.2771E+05 "alien"
60 43 61 1.50 11.77 0.1959E+06 0.9769E+05 "alien"
72 11 17 1.74 11.64 0.4278E+06 0.2619E+05 "alien"
80 24 26 1.50 11.41 0.1899E+06 0.9793E+05 "alien"
41 21 26 2.59 11.09 0.6988E+07 0.7945E+05 "alien"
44 18 20 2.59 11.08 0.6982E+07 0.7839E+05 "alien"
23 3 62 2.59 11.06 0.6971E+07 0.9154E+05 "alien"
69 7 22 1.80 11.06 0.5383E+06 0.2564E+05 "alien"
73 10 15 1.72 10.98 0.4036E+06 0.2356E+05 "alien"
70 17 35 1.68 10.96 0.3286E+06 0.2415E+05 "alien"
57 24 41 1.88 10.91 0.7746E+06 0.2842E+05 "alien"
82 24 6 1.50 10.74 0.1787E+06 0.1019E+06 "alien"
69 25 62 1.50 10.67 0.1775E+06 0.8689E+05 "alien"
24 20 44 2.91 10.45 0.9641E+07 0.1017E+06 "alien"
66 43 5 1.63 10.37 0.2468E+06 0.2294E+05 "alien"
81 4 29 1.53 10.36 0.1725E+06 0.2364E+05 "alien"
60 40 26 1.72 10.32 0.3792E+06 0.2578E+05 "alien"
39 18 57 2.18 10.24 0.3885E+07 0.5573E+05 "alien"
70 41 15 1.57 10.19 0.1922E+06 0.2281E+05 "alien"
55 36 41 1.79 10.16 0.4942E+06 0.2967E+05 "alien"
37 4 81 1.88 10.15 0.7202E+06 0.3357E+05 "alien"
56 27 5 2.06 10.14 0.1854E+07 0.3569E+05 "alien"
44 39 29 2.06 10.09 0.1844E+07 0.3805E+05 "alien"
65 46 29 1.56 10.06 0.1898E+06 0.2270E+05 "alien"
List of 33 reflections *NOT* obeying Wilson distribution (sorted by resolution)
Ice rings could occur at (Angstrom):
3.897,3.669,3.441, 2.671,2.249,2.072, 1.948,1.918,1.883,1.721
h k l RES Z Intensity Sigma
82 24 6 1.50 10.74 0.1787E+06 0.1019E+06
67 53 6 1.50 15.85 0.2638E+06 0.1128E+06
80 24 26 1.50 11.41 0.1899E+06 0.9793E+05
60 43 61 1.50 11.77 0.1959E+06 0.9769E+05
69 25 62 1.50 10.67 0.1775E+06 0.8689E+05
46 17 99 1.50 14.16 0.2357E+06 0.9588E+05
72 11 61 1.52 17.34 0.2886E+06 0.2367E+05
81 4 29 1.53 10.36 0.1725E+06 0.2364E+05
65 46 29 1.56 10.06 0.1898E+06 0.2270E+05
70 41 15 1.57 10.19 0.1922E+06 0.2281E+05
79 6 15 1.60 13.10 0.3117E+06 0.2477E+05
66 43 5 1.63 10.37 0.2468E+06 0.2294E+05
70 17 35 1.68 10.96 0.3286E+06 0.2415E+05
73 10 15 1.72 10.98 0.4036E+06 0.2356E+05
60 40 26 1.72 10.32 0.3792E+06 0.2578E+05
72 11 17 1.74 11.64 0.4278E+06 0.2619E+05
66 25 19 1.79 11.89 0.5788E+06 0.2739E+05
55 36 41 1.79 10.16 0.4942E+06 0.2967E+05
69 7 22 1.80 11.06 0.5383E+06 0.2564E+05
66 25 11 1.81 11.88 0.5781E+06 0.2771E+05
61 20 33 1.88 12.54 0.8900E+06 0.3054E+05
57 24 41 1.88 10.91 0.7746E+06 0.2842E+05
37 4 81 1.88 10.15 0.7202E+06 0.3357E+05
56 27 5 2.06 10.14 0.1854E+07 0.3569E+05
44 39 29 2.06 10.09 0.1844E+07 0.3805E+05
39 18 57 2.18 10.24 0.3885E+07 0.5573E+05
44 4 48 2.30 12.38 0.4695E+07 0.6072E+05
44 18 20 2.59 11.08 0.6982E+07 0.7839E+05
41 21 26 2.59 11.09 0.6988E+07 0.7945E+05
23 3 62 2.59 11.06 0.6971E+07 0.9154E+05
34 32 2 2.75 13.44 0.1239E+08 0.1279E+06
24 20 44 2.91 10.45 0.9641E+07 0.1017E+06
35 10 25 3.17 14.39 0.2118E+08 0.2364E+06
cpu time used by XSCALE 25.9 sec
elapsed wall-clock time 28.1 sec
I would like to extract the second-to-last line in which the 11th column has a number followed by an asterisk (xy.z*), plus the lines directly above and below it. That is, from the table headed SUBSET OF INTENSITY DATA WITH SIGNAL/NOISE >= -3.0 AS FUNCTION OF RESOLUTION above.
For example, in this table the line I'm looking for would contain "23.2*" in the 11th column (CC(1/2)). I want the second-to-last line with an asterisk because the last one would be the line that starts with total, which was a lot easier to extract with a simple grep command.
So the expected output for the code in this case would be to print the lines:
1.60 209398 16298 16304 100.0% 206.1% 208.5% 209397 1.35 214.6% 48.9* -2 0.693 15466
1.50 273432 20770 20893 99.4% 333.4% 342.1% 273340 0.80 346.9% 23.2* -1 0.644 19495
1.40 33 27 27248 0.1% 42.6% 112.7% 12 0.40 60.3% 88.2 0 0.000 0
And so on for all the different possible positions of the asterisk in the table.
In my previous question I received the answer
sed -n '/LIMIT/,/=/{/^\s*\(\S*\s*\)\{10\}[0-9.-]*\*/H;x;s/^.*\n\(.*\n.*\)$/\1/;x;/=/{x;P;q}}' file
which worked really well (thanks, Endoro) for extracting just the second-to-last line with an asterisk in the 11th column, which is what I asked for, but now I need it edited slightly, or a whole new command, to include the lines above and below as well.
Here is a link to the previous question: Extracting the second last line from a table using a specific number followed by an asterisk (e.g. xy.z*)
Any help would be greatly appreciated.
Sam
Code for GNU sed
sed -rn '/LIMIT/,/total/{//!H};/total/{x;s/^.*\n(.*\n)((\s+\S+){10}\s+[0-9.]+\*(\s+\S+){3}\n(\s+\S+){14}).*/\1\2/;p;q}' file
$sed -rn '/LIMIT/,/total/{//!H};/total/{x;s/^.*\n(.*\n)((\s+\S+){10}\s+[0-9.]+\*(\s+\S+){3}\n(\s+\S+){14}).*/\1\2/;p;q}' file
1.60 209398 16298 16304 100.0% 206.1% 208.5% 209397 1.35 214.6% 48.9* -2 0.693 15466
1.50 273432 20770 20893 99.4% 333.4% 342.1% 273340 0.80 346.9% 23.2* -1 0.644 19495
1.40 33 27 27248 0.1% 42.6% 112.7% 12 0.40 60.3% 88.2 0 0.000 0
A bit dirty but should work:
awk '
    # Collect the whole table, from its heading down to the "total" row.
    /^ *SUBSET OF INTENSITY/,/^ *total/ {
        a[++i] = $0;    # the full line
        b[i]   = $11;   # its 11th column
    }
    END {
        # Scan backwards, starting just above the "total" row, for the last
        # data line whose 11th column carries an asterisk, then print it
        # together with its neighbours.
        for (o = i - 1; o >= 2; o--)
            if (b[o] ~ /\*/) {
                print a[o-1] "\n" a[o] "\n" a[o+1]
                break
            }
    }' log