extract text between two blocks using regex

I am trying to extract the text between the two strings using the following regex.
(?s)Non-terminated Pods:.*?in total.\R(.*)(?=Allocated resources)
This regex looks fine on regex101 but somehow does not print the pod details when used with perl or grep -P. The command below results in empty output.
kubectl describe node |perl -le '/(?s)Non-terminated Pods:.*?in total.\R(.*)(?=Allocated resources)/m; printf "$1"'
Here is the sample input:
PodCIDRs: 10.233.65.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Allocated resources:
Question:
How do I extract the info from the above output so that it looks like the output below, and what is wrong with the regex or the command that I am using?
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Question 2: What if I have two similar blocks of input? How do I extract the pod details from both?
E.g., if the input is:
PodCIDRs: 10.233.65.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Allocated resources:
....some
.......random data...
PodCIDRs: 10.233.65.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-2 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp3-2 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Allocated resources:

With some obvious assumptions, and keeping it close to the pattern in the question:
perl -0777 -wnE'
    my @pods = /Non-terminated\s+Pods:\s+\([0-9]+\s+in\s+total\)\n(.*?)\nAllocated resources:/gs;
    say for @pods
' input-file
(note the /gs modifiers on the regex above)
The regex from the question works when used instead of the one in this answer (and with no /s modifier, as it should, since it already carries an inline (?s)) on a single block of text. To work with multiple blocks, the (.*) in it needs to be changed to (.*?), so that it doesn't match all the way to the last Allocated...
As for how that regex is "used with perl": the one-liner shown in the question runs with only -l and -e, no -n or -p, so it never reads any input; the match runs once against an empty $_ and $1 stays undefined, which is why the output is empty.
Comments on the command-line program above:
The -0777 switch makes it read the file whole into a string, available in the program in the variable $_, to which the regex is bound by default
There is also the switch -g, an alias for -0777, available starting with 5.36.0
We still need the -n switch so that the program iterates over the "lines" of input (STDIN or a file). In this case the input record separator is undefined so it's all just one "line"
The regex captures are returned since the match operator is in list context, being assigned to the array @pods
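For completeness, the question's own pattern can be dropped into the same slurp approach; here is a sketch, with (.*) made non-greedy and the input piped from kubectl as in the question:
kubectl describe node | perl -0777 -wnE'
    # slurp all of STDIN, then capture every block between the two markers
    my @pods = /Non-terminated Pods:.*?in total.\R(.*?)(?=Allocated resources)/gs;
    say for @pods
'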

Using GNU grep, you can use your regex with some tweaks:
kubectl describe node |
grep -zoP '(?s)Non-terminated Pods:.*?in total.\R\K(.*?)(?=Allocated resources)'
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Used \K (match reset) after \R to remove that matched line from the output.
Used the -z option to treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline.
PS: The same regex will work with the second input block as well, with the header line shown before each block.
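If the NUL separators that -z puts on the output get in the way downstream (an extra step, not needed just for viewing the result), they can be stripped with tr:
kubectl describe node |
grep -zoP '(?s)Non-terminated Pods:.*?in total.\R\K(.*?)(?=Allocated resources)' |
tr -d '\0'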
Alternatively, you can use any version of sed for this job as well:
kubectl describe node |
sed -n '/Non-terminated Pods:.*in total.*/,/Allocated resources:/ {//!p;}'
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
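The range address re-triggers for every matching block, so (assuming the blocks look like the samples in Question 2) the same sed command also extracts the pod details from both blocks, header lines included:
kubectl describe node |
sed -n '/Non-terminated Pods:.*in total.*/,/Allocated resources:/ {//!p;}'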

With your shown samples, please try the following GNU awk code (written and tested with GNU awk). A simple explanation: set RS to Non-terminated Pods:.*Allocated resources: for Input_file. Then, in the main program, check whether RT is non-empty; if it is, use awk's gsub function to remove (^|\n)Non-terminated Pods:[^\n]*\n or \nAllocated resources:\n* from the RT variable, and print its value, which gives the output shown in the samples.
awk -v RS='Non-terminated Pods:.*Allocated resources:' '
RT{
  gsub(/(^|\n)Non-terminated Pods:[^\n]*\n|\nAllocated resources:\n*/,"",RT)
  print RT
}
' Input_file
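If the input comes straight from kubectl rather than a saved Input_file (an assumption matching the question's setup), the same program works on a pipe:
kubectl describe node |
awk -v RS='Non-terminated Pods:.*Allocated resources:' 'RT{gsub(/(^|\n)Non-terminated Pods:[^\n]*\n|\nAllocated resources:\n*/,"",RT); print RT}'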

A possible solution, suitable for very big files, is to read the input line by line: select the range of lines of interest and skip the last one, which is not part of the desired output.
use strict;
use warnings;

while (<>) {
    # flip-flop range: true from the header line through the "Allocated resources:" line
    if ( /^\s*Namespace/ .. /^Allocated resources:/ ) {
        print unless /^Allocated resources:/;   # drop the closing marker itself
    }
}
exit 0;
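For example, saving the script as, say, extract-pods.pl and feeding it the same input as in the question:
kubectl describe node | perl extract-pods.pl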
Output
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-1 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp8 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default foo-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system nginx-proxy-kube-worker-2 25m (1%) 0 (0%) 32M (1%) 0 (0%) 9m8s
kube-system nodelocaldns-xbjp3-2 100m (5%) 0 (0%) 70Mi (4%) 170Mi (10%) 7m4s

Related

Multiple processes have the same connection

When I start passenger, multiple processes have the same connection.
bundle exec passenger-status
Requests in queue: 0
* PID: 13830 Sessions: 0 Processed: 107 Uptime: 1h 24m 22s
CPU: 0% Memory : 446M Last used: 41s ago
* PID: 13909 Sessions: 0 Processed: 0 Uptime: 41s
CPU: 0% Memory : 22M Last used: 41s ago
ss -antp4 | grep ':3306 '
ESTAB 0 0 XXX.XXX.XXX.XXX:55488 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=14),("ruby",pid=13830,fd=14),("ruby",pid=4672,fd=14)) #<= 4672 is preloader process?
ESTAB 0 0 XXX.XXX.XXX.XXX:55550 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13830,fd=24))
ESTAB 0 0 XXX.XXX.XXX.XXX:55552 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=24))
Is the connection using port 55488 correct?
I believe that inconsistencies occur when multiple processes refer to the same connection. But I can't find the problem in my application.
I am using Rails 4.x and passenger 6.0.2

Prometheus: the "for" is breaking my test

I have this alert, which I am trying to cover with unit tests:
- alert: Alert name
  annotations:
    summary: 'Summary.'
    book: "https://link.com"
  expr: sum(increase(app_receiver{app="app_name", exception="exception"}[1m])) > 0
  for: 5m
  labels:
    severity: 'critical'
    team: 'myteam'
This test fails every time unless for: 5m is commented out in the rule; in that case it passes.
rule_files:
  - ./../../rules/test.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 0 0 0 0 0'
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 10 20 40 60 80'
    alert_rule_test:
      - alertname: Alert name
        eval_time: 5m
        exp_alerts:
          - exp_labels:
              severity: 'critical'
              team: 'myteam'
            exp_annotations:
              summary: 'Summary.'
              book: "https://link.com"
The result of this test:
FAILED:
alertname:Alert name, time:5m,
exp:"[Labels:{alertname=\"Alert name\", severity=\"critical\", team=\"myteam\"} Annotations:{book=\"https://link.com\", summary=\"Summary.\"}]",
got:"[]
Can someone please help me fix this test and explain the reason for the failure?

What do series values stand for in Prometheus unit test?

I am trying to understand what series values stand for in Prometheus unit test.
The official doc does not provide any info.
For example, fire an alert if any instance is down for over 10 seconds.
alerting-rules.yml
groups:
  - name: alert_rules
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 10s
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 10 seconds."
alerting-rules.test.yml
rule_files:
  - alerting-rules.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'up{job="prometheus", instance="localhost:9090"}'
        values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
    alert_rule_test:
      - eval_time: 10m
        alertname: InstanceDown
        exp_alerts:
          - exp_labels:
              severity: critical
              instance: localhost:9090
              job: prometheus
            exp_annotations:
              summary: "Instance localhost:9090 down"
              description: "localhost:9090 of job prometheus has been down for more than 10 seconds."
Originally, I thought that because of interval: 1m, which is 60 seconds, and there being 15 numbers, 60 / 15 = 4s, each value stands for 4 seconds (1 means up, 0 means down).
However, when the values are
values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
or
values: '1 1 1 1 1 1 1 1 1 0 0 0 0 0 0'
Both will pass the test when I run promtool test rules alerting-rules.test.yml.
But below will fail:
values: '1 1 1 1 1 1 1 1 1 1 0 0 0 0 0'
So my original thought that each number stands for 4s is wrong; if that assumption were correct, only fewer than three trailing 0s should make the test fail.
What do series values stand for in Prometheus unit test?
Your assumption is incorrect. The numbers in values are not sub-divisions of the interval; each one is the sample the series has at successive evaluation intervals, starting at time 0. For example:
values: 1  1  1  1  1  1
#       0m 1m 2m 3m 4m 5m
In your example you evaluate at 10m (eval_time), so what matters is the state of up around that point. Because the rule has for: 10s, up must already have been 0 for at least 10 seconds at 10m, which means the tenth value (the sample at 9m) has to be 0 as well. When you change that tenth value to 1, the alert only becomes pending at 10m, so it is not yet firing at eval_time and the test fails.
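Laid out on the test's 1m timeline (samples start at 0m, one per minute; the comments are an added interpretation of the results reported in the question):
# passes: up == 0 from 9m on, so at eval_time 10m it has held for 60s, longer than for: 10s
values: '1 1 1 1 1 1 1 1 1 0 0 0 0 0 0'
# fails: up == 0 only from 10m on, so at eval_time 10m the for: 10s has not elapsed yet
values: '1 1 1 1 1 1 1 1 1 1 0 0 0 0 0'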

error when resizing partition using `growpart` on AWS EBS instance

I have an EC2 instance where I'm attempting to resize the disk on the fly. I've followed the instructions in this SO post but when I run sudo growpart /dev/nvme0n1p1 1, I get the following error:
FAILED: failed to get start and end for /dev/nvme0n1p11 in /dev/nvme0n1p1
What does this mean and how can I resolve it?
More info:
Output from lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
└─nvme0n1p1 259:1 0 300G 0 part /
I can see that EBS volume is in the in-use (optimizing) state.
Thanks in advance!
But for me the solution didn’t work:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 280G 0 disk
├─xvdf1 202:81 0 4G 0 part [SWAP]
├─xvdf2 202:82 0 10G 0 part /data1
├─xvdf3 202:83 0 10G 0 part /data2
├─xvdf4 202:84 0 1K 0 part
├─xvdf5 202:85 0 10G 0 part /applications1
├─xvdf6 202:86 0 4G 0 part /applications2
├─xvdf7 202:87 0 8G 0 part /logsOld
├─xvdf8 202:88 0 50G 0 part /extra
├─xvdf9 202:89 0 20G 0 part /logs
└─xvdf10 202:90 0 64G 0 part /extra/tmp
growpart /dev/xvdf 10
FAILED: failed to get start and end for /dev/xvdf10 in /dev/xvdf
I think the name of the command growpart is a bit misleading, because following the AWS instructions you should pass the disk device and the partition number:
sudo growpart /dev/nvme0n1 1
not the partition device /dev/nvme0n1p1.
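A minimal sketch of the full sequence for the layout in the question's first lsblk output (the filesystem commands are an assumption, since the question doesn't say which filesystem is on the root partition):
# grow partition 1 on the disk device, then confirm the new partition size
sudo growpart /dev/nvme0n1 1
lsblk /dev/nvme0n1
# the filesystem still has to be grown afterwards, e.g. for ext4:
sudo resize2fs /dev/nvme0n1p1
# or, if the root filesystem is XFS:
sudo xfs_growfs -d /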

sphinx grammarLocation can't locate resource

When building the JSGFDemo using Ant, everything works fine, and running the JSGFDemo.jar build artifact works without any errors. However, when using the source folder imported into Eclipse, with the jars in the lib/ directory added to the build path, the program fails with the following message:
Problem configuring recognizer
Property exception component:'jsgfGrammar' property:'grammarLocation' - Can't locate resource:/edu/cmu/sphinx/demo/jsapi/jsgf/
edu.cmu.sphinx.util.props.InternalConfigurationException: Can't locate resource:/edu/cmu/sphinx/demo/jsapi/jsgf/
For some reason, the call to ConfigurationManagerUtils.class.getResource(resourceName); in ConfigurationManagerUtils.resourceToURL(String location) seemingly returns different results (null, or a valid URL object) for location = "resource:/edu/cmu/sphinx/demo/jsapi/jsgf/".
As a side note, I thought getResource("/path/to/a/dir/not/file/"); was invalid when it resolves to a path inside a jar.
I've been banging my head against this for a while now and can't see what I'm doing wrong.
I believe I have found the issue. By default, Eclipse seems to construct the jar differently, leaving out entries for directories.
Investigating the archives with unzip -v reveals some interesting details.
File from building with Ant:
Archive: JSGFDemo.jar
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
0 Stored 0 0% 2013-03-31 03:13 00000000 META-INF/
284 Defl:N 210 26% 2013-03-31 03:13 ddd976ff META-INF/MANIFEST.MF
0 Stored 0 0% 2013-03-31 03:08 00000000 edu/
0 Stored 0 0% 2013-03-31 03:08 00000000 edu/cmu/
0 Stored 0 0% 2013-03-31 03:13 00000000 edu/cmu/sphinx/
0 Stored 0 0% 2013-03-31 03:12 00000000 edu/cmu/sphinx/demo/
0 Stored 0 0% 2013-03-31 03:13 00000000 edu/cmu/sphinx/demo/jsapi/
0 Stored 0 0% 2013-03-31 03:13 00000000 edu/cmu/sphinx/demo/jsapi/jsgf/
7391 Defl:N 3501 53% 2013-03-31 03:13 938438dd edu/cmu/sphinx/demo/jsapi/jsgf/JSGFDemo.class
798 Defl:N 326 59% 2013-03-31 03:13 647722fc edu/cmu/sphinx/demo/jsapi/jsgf/books.gram
204 Defl:N 140 31% 2013-03-31 03:13 789bb514 edu/cmu/sphinx/demo/jsapi/jsgf/commands.gram
9295 Defl:N 1500 84% 2013-03-31 03:13 3b519044 edu/cmu/sphinx/demo/jsapi/jsgf/jsgf.config.xml
1589 Defl:N 473 70% 2013-03-31 03:13 60075af0 edu/cmu/sphinx/demo/jsapi/jsgf/movies.gram
299 Defl:N 195 35% 2013-03-31 03:13 42e94d32 edu/cmu/sphinx/demo/jsapi/jsgf/music.gram
666 Defl:N 288 57% 2013-03-31 03:13 ca4b72f9 edu/cmu/sphinx/demo/jsapi/jsgf/news.gram
-------- ------- --- -------
20526 6633 68% 15 files
Jar exported using eclipse:
Archive: JSGFDemo-eclipse.jar
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
180 Defl:N 134 26% 2013-03-31 23:35 1e681d3b META-INF/MANIFEST.MF
7338 Defl:N 3537 52% 2013-03-31 23:29 ed8c4c3f edu/cmu/sphinx/demo/jsapi/jsgf/JSGFDemo.class
798 Defl:N 326 59% 2013-03-31 13:21 647722fc edu/cmu/sphinx/demo/jsapi/jsgf/books.gram
204 Defl:N 140 31% 2013-03-31 13:21 789bb514 edu/cmu/sphinx/demo/jsapi/jsgf/commands.gram
9295 Defl:N 1500 84% 2013-03-31 13:21 3b519044 edu/cmu/sphinx/demo/jsapi/jsgf/jsgf.config.xml
1589 Defl:N 473 70% 2013-03-31 13:21 60075af0 edu/cmu/sphinx/demo/jsapi/jsgf/movies.gram
299 Defl:N 195 35% 2013-03-31 13:21 42e94d32 edu/cmu/sphinx/demo/jsapi/jsgf/music.gram
666 Defl:N 288 57% 2013-03-31 13:21 ca4b72f9 edu/cmu/sphinx/demo/jsapi/jsgf/news.gram
-------- ------- --- -------
20369 6593 68% 8 files
After a quick Google search, I found the "Add directory entries" option in Eclipse's JAR export wizard.
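To confirm that a re-exported jar now contains the directory entries (using the jar name from the listing above), one quick check is to list names ending in a slash:
unzip -l JSGFDemo-eclipse.jar | grep '/$'
If edu/cmu/sphinx/demo/jsapi/jsgf/ shows up in that list, the resource:/edu/cmu/sphinx/demo/jsapi/jsgf/ lookup should resolve.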