String Replacing using Sed or Awk - regex

cat vols.txt (this list can fluctuate depending on the vols a system might have)
$Vol01
$Vol02
$Vol03
$Vol04
$Vol05
$Vol06
$Vol07
$Vol08
If I do:
cat datavols.log | sed -n 's/^\$.*/vanish &.acd*.* -db ! \&/gp' >> workfile.txt
I get:
cat workfile.txt
vanish $Vol01.acd*.* -db ! &
vanish $Vol02.acd*.* -db ! &
vanish $Vol03.acd*.* -db ! &
vanish $Vol04.acd*.* -db ! &
vanish $Vol05.acd*.* -db ! &
vanish $Vol06.acd*.* -db ! &
vanish $Vol07.acd*.* -db ! &
vanish $Vol08.acd*.* -db ! &
I have 4 CPUs and I want to distribute the work between them, so the output should be something like:
cat workfile.txt
Run -cpu=0 vanish $Vol01.acd*.* -db ! &
Run -cpu=1 vanish $Vol02.acd*.* -db ! &
Run -cpu=2 vanish $Vol03.acd*.* -db ! &
Run -cpu=3 vanish $Vol04.acd*.* -db ! &
Run -cpu=0 vanish $Vol05.acd*.* -db ! &
Run -cpu=1 vanish $Vol06.acd*.* -db ! &
Run -cpu=2 vanish $Vol07.acd*.* -db ! &
Run -cpu=3 vanish $Vol08.acd*.* -db ! &
Kindly help me with this. I am not sure how to have a variable cycle through 0-3 inside sed. Thanks in advance.

awk 'BEGIN{i=0}{print "Run -cpu="i " vanish "$1".acd*.* -db ! &"; i=(i+1)%4}' inputfile
will produce this output:
Run -cpu=0 vanish $Vol01.acd*.* -db ! &
Run -cpu=1 vanish $Vol02.acd*.* -db ! &
Run -cpu=2 vanish $Vol03.acd*.* -db ! &
Run -cpu=3 vanish $Vol04.acd*.* -db ! &
Run -cpu=0 vanish $Vol05.acd*.* -db ! &
Run -cpu=1 vanish $Vol06.acd*.* -db ! &
Run -cpu=2 vanish $Vol07.acd*.* -db ! &
Run -cpu=3 vanish $Vol08.acd*.* -db ! &

You can use FNR, the current line number of the file, in awk:
awk '{print "Run -cpu=" (FNR-1)%4 " vanish " $0 ".acd*.* -db ! &"}' vols.txt

You can also use this simple bash script.
Total_Cpu=4
Count=0
while read -r line ; do
    if [ $Count -eq $Total_Cpu ]; then
        Count=0
    fi
    echo "$line" | sed "s/^\\\$.*/Run -cpu=$Count vanish &.acd*.* -db ! \&/" >> Output_file
    Count=`expr $Count + 1`
done < Input_File

This might work for you (GNU sed):
sed -rn '1{x;s/^/0123/;x};G;s/(.*)\n(.)(.*)/Run -cpu=\2 vanish \1.acd*.* -db ! \&\n\3\2/;P;s/.*\n//;h' file
Create a line with the sequence of CPUs you want and store it in the hold space (HS).
Append the HS to the current line and, using the substitute command, insert the cpu number and the required strings, prepping the cpu order for the next line.
Print the string, then store the amended cpu order back in the HS.
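Broken out with comments, the same idea reads as follows (a sketch of the one-liner above, assuming vols.txt as the input file):
sed -rn '
  # on line 1, seed the hold space with the cpu sequence 0123
  1{x;s/^/0123/;x}
  # append the hold space (the cpu sequence) below the current line
  G
  # build the command with the first cpu digit and rotate that digit to the back of the sequence
  s/(.*)\n(.)(.*)/Run -cpu=\2 vanish \1.acd*.* -db ! \&\n\3\2/
  # print the finished command (up to the first newline)
  P
  # keep only the rotated cpu sequence and store it back in the hold space
  s/.*\n//
  h
' vols.txt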

sed '1 {
x;s/^/Run -cpu=3 vanish :-D.acd*.* -db ! \&/;x
}
x;y/0123/1230/;G;s/\(.*sh \)[^.]\{1,\}\(.*\)\n\(.*\)/\1\3\2/;h' YourFile
A bit like potong's, but using the last generated line as the reference, only changing the $Vol content and the cpu value.

Related

rrdtool update expected 2 data sources

I set up a simple rrdtool database to graph Wi-Fi signal strength and modulation. The signal strength part works, but when I try to update the db with MCS information, I get:
ERROR: ./somefile.rrd: expected 2 data source readings (got 1) from mcsul15
Here's my update code:
rssi=`snmpget -v 2c -c communityname 1.2.3.4 .1.3.6.1.4.1.17713.21.1.2.3.0 | awk -v x=4 '{print $x}' | tr -d -`
noisefloor=`snmpget -v 2c -c communityname 1.2.3.4 .1.3.6.1.4.1.17713.21.1.2.20.1.9.1 | awk -v x=4 '{print $x}' | tr -d -`
ulmcs14=`snmpget -v 2c -c communityname 1.2.3.4 CAMBIUM-PMP80211-MIB::ulWLanMCS14Packets.0 | awk -v x=4 '{print $x}'`
ulmcs15=`snmpget -v 2c -c communityname 1.2.3.4 CAMBIUM-PMP80211-MIB::ulWLanMCS15Packets.0 | awk -v x=4 '{print $x}'`
echo $rssi
echo $noisefloor
echo $ulmcs14
echo $ulmcs15
rrdtool update ./somefile.rrd --template \
rssi:noisefloor N:$rssi:$noisefloor \
mcsul15:mcsul14 N:$ulmcs15:$ulmcs14
Which gives me:
68
94
143679
17602658
ERROR: ./somefile.rrd: expected 2 data source readings (got 1) from mcsul15
What am I missing?
Assuming that somefile.rrd has 4 DS defined in it with those 4 names, you should give all four together when updating. You can only specify one template per update, and the remaining arguments must follow that template's format.
Also, check that your DS names are correct: your variable is called $ulmcs15 but the DS is named mcsul15.
rrdtool update ./somefile.rrd --template \
rssi:noisefloor:mcsul15:mcsul14 \
N:$rssi:$noisefloor:$ulmcs15:$ulmcs14
The error message arises because, in your original command line, mcsul15:mcsul14 is taken as an update vector, not a template. Thus it is one timestamp and one value, where two values were expected. A better error message would have been something like "timestamp not recognised in 'mcsul15'", but that's a different issue...
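For reference, this assumes somefile.rrd was created with those four data sources, i.e. something along these lines (the step, heartbeat and RRA values here are only illustrative):
rrdtool create ./somefile.rrd --step 300 \
    DS:rssi:GAUGE:600:U:U \
    DS:noisefloor:GAUGE:600:U:U \
    DS:mcsul15:COUNTER:600:U:U \
    DS:mcsul14:COUNTER:600:U:U \
    RRA:AVERAGE:0.5:1:8640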

How to fix "Segmentation fault" in fortran program

I wrote this program that reads daily gridded climate model data (6 variables) from a file and uses it in further calculations. When running the pgm for a relatively short period (e.g. 5 years) it works fine, but when I want to run it for the required 30 year period I get a "Segmentation fault".
System description: Lenovo Thinkpad with Core i7 vPro with Windows 10 Pro
Program run in Fedora (64-bit) inside Oracle VM VirtualBox
After commenting out everything and checking section-by-section I found that:
everything works fine for 30 years as long as it reads 4 variables only
as soon as the 5th or 6th variable is added, the problem creeps in
alternatively, I can run it with all 6 variables but then it only works for a shorter analysis period (e.g. 22 years)
So the problem might lie with:
the statement: recl=AX*AY*4 which I borrowed from another pgm, yet changing the 4 doesn't fix it
the system I'm running the pgm on
I have tried the "ulimit -s unlimited" command suggested elsewhere, but only get the response "cannot modify limit: Operation not permitted".
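For scale, a rough back-of-the-envelope of what the six daily arrays alone require, with AX=AY=162, AT=30 years and 4-byte reals:
echo $((162*162*31*12*30*4))     # 1171532160 bytes, roughly 1.1 GiB per (AX,AY,31,12,AT) array
echo $((162*162*31*12*30*4*6))   # 7029192960 bytes, roughly 6.5 GiB for all six arrays
That total has to fit in the memory available to the VM, which may be why four variables (or a shorter period such as 22 years) work while all six over 30 years do not.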
File = par_query.h
      integer AX,AY,startyr,endyr,AT
      character pperiod*9,GCM*4
      parameter(AX=162,AY=162)   ! dim of GCM array
      parameter(startyr=1961,endyr=1990,AT=endyr-startyr+1,
     &          pperiod="1961_1990")
      parameter(GCM='ukmo')
File = query.f
      program query
!# A FORTRAN program that reads global climate model (GCM) data to
!# be used in further calculations
!# uses parameter file: par_query.h
!# compile as: gfortran -c -mcmodel=large query.f
!#             gfortran query.o
!# then run:   ./a.out
! Declarations ***************************************************
      implicit none
      include 'par_query.h'      ! parameter file
      integer :: i,j,k,m,n,nn,leapa,leapb,leapc,leapn,rec1,rec2,rec3,
     &           rec4,rec5,rec6
      integer, dimension(12) :: mdays
      real :: ydays,nyears
      real, dimension(AX,AY,31,12,AT) :: tmax_d,tmin_d,rain_d,rhmax_d,
     &                                   rhmin_d,u10_d
      character :: ipath*43,fname1*5,fname2*3,nname*14,yyear*4,
     &             mmonth*2,ext1*4
! Data statements and defining characters ************************
      data mdays/31,28,31,30,31,30,31,31,30,31,30,31/  ! Days in month
      ydays=365.                 ! Days in year
      nyears=real(AT)            ! Analysis period (in years)
      ipath="/run/media/stephan/SS_Elements/CCAM_africa/"  ! Path to
!                                  input data directory
      fname1="ccam_"             ! Folder where data is located #1
      fname2="_b/"               ! Folder where data is located #2
      nname="ccam_africa_b."     ! Input filename (generic part)
      ext1=".dat"
      leapa=0
      leapb=0
      leapc=0
      leapn=0
! Read daily data from GCM ***************************************
      do n=startyr,endyr         ! Start looping through years ------
        write(yyear,'(i4.4)')n
        nn=n-startyr+1
        ! Test for leap years
        leapa=mod(n,4)
        leapb=mod(n,100)
        leapc=mod(n,400)
        if (leapa==0) then
          if (leapb==0) then
            if (leapc==0) then
              leapn=1
            else
              leapn=0
            endif
          else
            leapn=1
          endif
        else
          leapn=0
        endif
        if (leapn==1) then
          mdays(2)=29
          ydays=366.
        else
          mdays(2)=28
          ydays=365.
        endif
        do m=1,12                ! Start looping through months -----
          write(mmonth,'(i2.2)')m
          ! Reading daily data from file
          print*,"Reading data for ",n,mmonth
          open(101,file=ipath//fname1//GCM//fname2//nname//GCM//"."//
     &         yyear//mmonth//ext1,access='direct',recl=AX*AY*4)
          do k=1,mdays(m)        ! Start looping through days -------
            rec1=(k-1)*6+1
            rec2=(k-1)*6+2
            rec3=(k-1)*6+3
            rec4=(k-1)*6+4
            rec5=(k-1)*6+5
            rec6=(k-1)*6+6
            read(101,rec=rec1)((tmax_d(i,j,k,m,nn),i=1,AX),j=1,AY)
            read(101,rec=rec2)((tmin_d(i,j,k,m,nn),i=1,AX),j=1,AY)
            read(101,rec=rec3)((rain_d(i,j,k,m,nn),i=1,AX),j=1,AY)
            read(101,rec=rec4)((rhmax_d(i,j,k,m,nn),i=1,AX),j=1,AY)
            read(101,rec=rec5)((rhmin_d(i,j,k,m,nn),i=1,AX),j=1,AY)
            read(101,rec=rec6)((u10_d(i,j,k,m,nn),i=1,AX),j=1,AY)
          enddo                  ! k-loop (days) ends ---------------
          close(101)
        enddo                    ! m-loop (months) ends -------------
      enddo                      ! n-loop (years) ends --------------
      end program query

Mixing cpuset.cpus and cpuset.mems cgroups with cpu.shares with memory.limit_in_bytes

I have some confusion over how cgroups work. Here's my understanding of these cgroup limits...
cpuset.cpus binds to a specific core
cpuset.mems binds to a specific NUMA node
cpu.shares tells the scheduler to give a certain percentage of CPU processing power
memory.limit_in_bytes limits the amount of memory available to the process
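For concreteness, here is a minimal cgroup-v1 sketch of what I mean by binding a process to all four at once (the group name and values are just examples, and the paths assume the usual per-controller mounts under /sys/fs/cgroup):
mkdir /sys/fs/cgroup/cpuset/mygrp /sys/fs/cgroup/cpu/mygrp /sys/fs/cgroup/memory/mygrp
echo 0-1 > /sys/fs/cgroup/cpuset/mygrp/cpuset.cpus        # cores 0 and 1
echo 0   > /sys/fs/cgroup/cpuset/mygrp/cpuset.mems        # NUMA node 0
echo 512 > /sys/fs/cgroup/cpu/mygrp/cpu.shares            # relative CPU weight
echo $((4*1024*1024*1024)) > /sys/fs/cgroup/memory/mygrp/memory.limit_in_bytes   # 4 GiB
echo $$ > /sys/fs/cgroup/cpuset/mygrp/cgroup.procs        # attach the current shell to each hierarchy
echo $$ > /sys/fs/cgroup/cpu/mygrp/cgroup.procs
echo $$ > /sys/fs/cgroup/memory/mygrp/cgroup.procs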
So what happens when you bind a process to a specific cpuset, cpu, and memory? Some examples...
If the NUMA nodes I bind to total 8GB but I set my memory limit to 12GB, what happens?
If I bind to cores 0 and 1 but set the cpu shares to 2 out of 1024, what happens?
Also, how do I know the details/specs of the cores/NUMA nodes that I'm referencing in cpuset?
I have the same confusion.
But based on my test, if cpuset.cpu_exclusive=0 (cores are shared), both cpu.shares and cpuset.cpus take effect. The test is as follows:
Create two cgroups, cputest1 and cputest2 (same hierarchy, directly under the root).
For both cputest1 and cputest2, do:
echo 0,2 > cpuset.cpus
echo 0 > cpuset.mems
new terminal a
cd /sys/fs/cgroup/cpuset/cputest1
echo $$ >> cgroup.procs
while :; do echo test > /dev/null; done
new terminal b
cd /sys/fs/cgroup/cpuset/cputest1
echo $$ >> cgroup.procs
while :; do echo test > /dev/null; done
new terminal c
cd /sys/fs/cgroup/cpuset/cputest2
echo $$ >> cgroup.procs
while :; do echo test > /dev/null; done
Run top: there are three bash tasks, and the CPU usage of each is about the same (60%-70%); the total is 200%.
Now repeat the test with the cpu controller:
terminal a
echo $$ >> /sys/fs/cgroup/cpu/cputest1/cgroup.procs
while :; do echo test > /dev/null; done
terminal b
echo $$ >> /sys/fs/cgroup/cpu/cputest1/cgroup.procs
while :; do echo test > /dev/null; done
terminal c
echo $$ >> /sys/fs/cgroup/cpu/cputest2/cgroup.procs
while :; do echo test > /dev/null; done
Try different share values for the two groups, e.g.:
echo 2 > /sys/fs/cgroup/cpu/cputest1/cpu.shares      # or 2048, or 1024
echo 2048 > /sys/fs/cgroup/cpu/cputest2/cpu.shares   # or 2, or 1024
Run top to watch the CPU usage of the three bash tasks.
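As for finding out which cores and NUMA nodes the cpuset numbers refer to, the layout can be inspected with standard tools, for example:
lscpu                                        # sockets, cores, and the "NUMA nodeN CPU(s)" lines
numactl --hardware                           # per-node CPU lists and memory sizes (needs numactl installed)
cat /sys/devices/system/node/node0/cpulist   # CPUs that belong to NUMA node 0
cat /sys/devices/system/node/node0/meminfo   # memory that belongs to NUMA node 0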

List workspaces of a user on a specific machine in Perforce

How can I get all Perforce workspaces of a specific user on a specific machine?
This command gives me all workspaces of a specific user on all machines:
p4 clients -u username
Here's a cmd one-liner that does more or less the same thing as pitseeker's:
for /f "tokens=2" %x in ('p4 clients -u username') do #(echo %x & p4 client -o %x | findstr /r /c:"^Host:")
A somewhat more robust batch file that seems to fit what you're looking for is:
@echo off
set USER=%1
set HOST=%2
REM Don't forget to double your for-loop percents in batch files,
REM unlike the one-liner above...
for /f "tokens=2" %%x in ('p4 clients -u %USER%') do call :CheckClient %%x
goto :EOF
:CheckClient
p4 client -o %1 | findstr /r /c:"^Host:" | findstr /i /r /c:"%HOST%$">nul && echo %1
goto :EOF
Save that and run it with the username as the first parameter and the desired host name as the second. That is, something like showclient elady elady_pc
Not exactly what you're asking for, but it's easy and perhaps sufficient:
p4 clients -u username | cut -f2 -d' ' | xargs -n 1 p4 client -o |egrep -e '^Client|^Host'
This lists all your clients and their host-restrictions (if any).
In the resulting list you can find the specific machines very easily.
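If you want the host filtering done on the Unix side as well, a small sketch along the same lines (using the example names from above; the Host value is assumed to match exactly):
p4 clients -u elady | cut -f2 -d' ' | while read c; do
    p4 client -o "$c" | grep -qi '^Host:[[:space:]]*elady_pc$' && echo "$c"
done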

Print remaining lines in file after regular expression that includes variable

I have the following data:
====> START LOG for Background Process: HRBkg Hello on 2013/09/27 23:20:20 Log Level 3 09/27 23:20:20 I Background process is using
processing model #: 3 09/27 23:20:23 I 09/27 23:20:23 I --
Started Import for External Key
====> START LOG for Background Process: HRBkg Hello on 2013/09/30 07:31:07 Log Level 3 09/30 07:31:07 I Background process is using
processing model #: 3 09/30 07:31:09 I 09/30 07:31:09 I --
Started Import for External Key
I need to extract the remaining file contents after the LAST match of ====> START LOG.....
I have tried numerous times to use sed/awk; however, I cannot seem to get awk to utilize a variable in my regular expression. The variable I was trying to include was for the date (2013/09/30), since that is what makes the line unique.
I am on an HP-UX machine and can not use grep -A.
Any advice?
There's no need to test for a specific time just to find the last entry in the file:
awk '
BEGIN { ARGV[ARGC] = ARGV[ARGC-1]; ARGC++ }
NR == FNR { if (/START LOG/) lastMatch=NR; next }
FNR == lastMatch { found=1 }
found
' file
This might work for you (GNU sed):
a=2013/09/30
sed '\|START LOG.*'"$a"'|{h;d};H;$!d;x' file
This will return your desired output.
sed -n '/START LOG/h;/START LOG/!H;$!b;x;p' file
If you have tac available, you could easily do:
tac <file> | sed '/START LOG/q' | tac
Here is one in Python:
#!/usr/bin/python
import sys, re
for fn in sys.argv[1:]:
    with open(fn) as f:
        m = re.search(r'.*(^====> START LOG.*)', f.read(), re.S | re.M)
        if m:
            print m.group(1)
Then run:
$ ./re.py /tmp/log.txt
====> START LOG for Background Process: HRBkg Hello on 2013/09/30 07:31:07 Log Level 3
09/30 07:31:07 I Background process is using processing model #: 3
09/30 07:31:09 I
09/30 07:31:09 I -- Started Import for External Key
If you want to exclude the ====> START LOGS.. bit, change the regex to:
r'.*(?:^====> START LOG.*?$\n)(.*)'
For the record, you can easily match a variable against a regular expression in Awk, or vice versa.
awk -v date='2013/09/30' '$0 ~ date {p=1} p' file
This sets p to 1 if the input line matches the date, and prints if p is non-zero.
(Recall that the general form in Awk is condition { actions } where the block of actions is optional; if omitted, the default action is to print the current input line.)
This prints from the last START LOG onwards: the first pass records the line number of the last match, and the second pass prints from there.
awk 'FNR==NR { if ($0~/^====> START LOG/) f=NR;next} FNR>=f' file file
You can use a variable, but if you have another file with another date, you need to know the date in advance.
var="2013/09/30"
awk '$0~v && /^====> START LOG/ {f=1}f' v="$var" file
====> START LOG for Background Process: HRBkg Hello on 2013/09/30 07:31:07 Log Level 3
09/30 07:31:07 I Background process is using processing model #: 3
09/30 07:31:09 I
09/30 07:31:09 I -- Started Import for External Key
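If the date is not known in advance, it can be pulled from the last START LOG line first and then reused (a sketch; the field position is taken from the sample lines above, where the date follows the word "on"):
var=$(awk '/^====> START LOG/ { for (i=1;i<NF;i++) if ($i=="on") d=$(i+1) } END { print d }' file)
awk '$0~v && /^====> START LOG/ {f=1}f' v="$var" file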
With GNU awk (gawk) or Mike's awk (mawk) you can set the record separator (RS) so that each record contains a whole log message. Then all you need to do is print the last one in the END block:
awk 'END { printf "%s", RS $0 }' RS='====> START LOG' infile
Output:
====> START LOG for Background Process: HRBkg Hello on 2013/09/30 07:31:07 Log Level 3
09/30 07:31:07 I Background process is using processing model #: 3
09/30 07:31:09 I
09/30 07:31:09 I -- Started Import for External Key
An answer in Perl, assuming your logs are in filelog.txt:
my @line;
open (LOG, "<filelog.txt") or die "could not open filelog.txt";
while (<LOG>) {
    push @line, $_;
}
close(LOG);
my $lengthline = $#line;
my @newarray;
my $j = 0;
for (my $i = $lengthline; $i >= 0; $i--) {
    $newarray[$j] = $line[$i];
    if ($line[$i] =~ m/^====> START LOG.*/) {
        last;
    }
    $j++;
}
print reverse(@newarray);