Perl: regex to extract multiple pieces of info from multiple lines in a file

Hello, I have the following lines in a file, and from them I want to get the user name and the version each user is running.
File
<W>2016-06-25 00:27:30.577 1 => <4:(-1)> Client version 1.2.10 (Win: 1.2.10)
<W>2016-06-25 00:27:30.635 1 => <4:[AAA] User1(1850)> Authenticated
<W>2016-06-25 00:27:30.635 1 => <2:(-1)> Client version 1.2.16 (Win: 1.2.16)
<W>2016-06-25 00:27:30.687 1 => <2:[AAA] User2(942)> Authenticated
Output wanted
4 : User1 : 1.2.10
2 : User2 : 1.2.16
So the data for one client is spread over 2 lines.
The first line gives the version number.
The second line gives the user name.
I noticed that both lines share a matching ID: in my example, the User1 lines have the ID 4: and the User2 lines have 2:.
So I started with something like this, but it doesn't really work as intended, and starting a second read through the entire file to find the second line is too much / not optimized.
Perl Script
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'mylogfile.log';
open (my $fl, '<:encoding(UTF-8)', $file)
    or die 'File not found';

while (my $row = <$fl>) {
    if ($row =~ m/\<(\d+).*\>\sclient\sversion\s(\d+.\d+.\d+)\s/i) {
        my $id   = $1;
        my $vers = $2;
        while (my $row1 = <$fl>) {
            if ($row1 =~ m/\<$id\:(.+)\(\d+\)\>/i) {
                my $name = $1;
                print "$id : $name : $vers\n";
            }
        }
    }
}
If any Perl guru has an idea, thanks! :-)

I see in your log file that the timestamps of corresponding rows are different.
So I suppose that when two users log in at the same time, the log records could get interleaved, for example:
<W>2016-06-25 00:27:30.577 1 => <4:(-1)> Client version 1.2.10 (Win: 1.2.10)
<W>2016-06-25 00:27:30.635 1 => <2:(-1)> Client version 1.2.16 (Win: 1.2.16)
<W>2016-06-25 00:27:30.635 1 => <4:[AAA] User1(1850)> Authenticated
<W>2016-06-25 00:27:30.687 1 => <2:[AAA] User2(942)> Authenticated
If this is the case, I would suggest using a hash to remember ids:
use strict;
use warnings;

my $file = 'mylogfile.log';
open (my $fl, '<:encoding(UTF-8)', $file)
    or die 'File not found';

my %ids;
while (my $row = <$fl>) {
    if ($row =~ m/\<(\d+).*\>\sclient\sversion\s(\d+.\d+.\d+)\s/i) {
        my ($id, $vers) = ($1, $2);
        $ids{$id} = $vers;
    }
    elsif ($row =~ m/\<(\d+)\:(.+)\(\d+\)\>.*authenticated/i) {
        if (defined $ids{$1}) {
            print "$1 : $2 : $ids{$1}\n";
            delete $ids{$1};
        }
    }
}

I don't know much about Perl, but I can provide an idea:
login = map();
while (row = readrow())
{
    if (match(id version))
        login[$1] = $2
    else if (match(id username userid))
    {
        print "user: ", $2, "version:", login[$1], "userid: $3", "sessionid: ", $1
        delete login[$1]
    }
}
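In Perl, that idea comes out roughly like this (an untested sketch that reuses the $fl filehandle from the question; the regexes are only illustrative and would need adjusting to the real log format):
my %login;
while (my $row = <$fl>) {
    if ($row =~ /<(\d+):.*Client version (\d+\.\d+\.\d+)/i) {
        $login{$1} = $2;                   # remember the version for this session id
    }
    elsif ($row =~ /<(\d+):\[\w+\]\s*(\S+)\((\d+)\)>.*Authenticated/i) {
        print "user: $2, version: $login{$1}, userid: $3, sessionid: $1\n";
        delete $login{$1};                 # forget the id once it has been reported
    }
}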

Running your code gave me the result
4 : [AAA] User1 : 1.2.10
Your second regular expression is capturing the bracketed letters and the user name. This isn't what your desired output looks like.
The second while loop exhausts the remainder of the file, which isn't what you want to do.
Here is a program that will produce the output you want. (I create a file handle from inline data at the top of the program. You would not do this but instead open your file 'mylogfile.log' just as you did in your code.)
#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', \<<EOF;
<W>2016-06-25 00:27:30.577 1 => <4:(-1)> Client version 1.2.10 (Win: 1.2.10)
<W>2016-06-25 00:27:30.635 1 => <4:[AAA] User1(1850)> Authenticated
<W>2016-06-25 00:27:30.635 1 => <2:(-1)> Client version 1.2.16 (Win: 1.2.16)
<W>2016-06-25 00:27:30.687 1 => <2:[AAA] User2(942)> Authenticated
EOF

while (<$fh>) {
    if (/<(\d+).+?Client version (\d+\.\d+\.\d+)/) {
        my ($id, $vers) = ($1, $2);
        # read next line and capture name
        if (<$fh> =~ /<$id\S+ ([^(]+)/) {
            my $name = $1;
            print join(" : ", $id, $name, $vers), "\n";
        }
    }
}
In my second regular expression, the piece [^(]+ is called a negated character class. It matches anything that is not a left paren, one or more times. This matches 'User1' and 'User2' in the lines of the file.
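A quick way to see the negated class at work (an illustrative one-liner, separate from the program above):
# [^(]+ grabs everything after the space up to, but not including, the '('
if ('<4:[AAA] User1(1850)> Authenticated' =~ /<4\S+ ([^(]+)/) {
    print "$1\n";   # prints "User1"
}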
Update: You can find info about character classes here.
Update 2: Looking at wolfrevokcats' reply, I see he made a valid observation and his solution is the safer one.


Perl: zipping a folder without the working directory file path

I am trying to compress a folder containing files and subfolders (with files) into a single zip. I'm limited to the core Perl modules, so I'm trying to work with IO::Compress::Zip. I want to remove the working-directory file path, but I seem to end up with a blank first folder before my zipped folder, as if there is a trailing "/" I haven't been able to get rid of.
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);

my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";

my $zipdir = $cwd . "\\source_folder";
my $zip = "source_folder.zip";
my @files = ();

sub process_file {
    next if (($_ eq '.') || ($_ eq '..'));
    if (-d && $_ eq 'fp') {
        $File::Find::prune = 1;
        return;
    }
    push @files, $File::Find::name if -f;
}

find(\&process_file, $cwd . "\\source_folder");

zip \@files => "$zip", FilterName => sub { s|\Q$cwd|| } or die "zip failed: $ZipError\n";
I have also attempted using the option CanonicalName => 1, which appears to leave the file path intact except for the drive letter (C:).
Substitution with
s[^$dir/][]
did nothing and
s<.*[/\\]><>
left me with no folder structure at all.
What am I missing?
UPDATE
The red level is unexpected and is not what is required; Windows Explorer is not able to see beyond this level.
There are two issues with your script.
First, you are mixing Windows and Linux/Unix paths in the script. Let me illustrate.
I've created a subdirectory called source_folder to match your script:
$ dir source_folder
Volume in drive C has no label.
Volume Serial Number is 7CF0-B66E
Directory of C:\Scratch\source_folder
26/11/2018 19:48 <DIR> .
26/11/2018 19:48 <DIR> ..
26/11/2018 17:27 840 try.pl
01/06/2018 13:02 6,653 url
2 File(s) 7,493 bytes
When I run your script unmodified, I get an apparently empty zip file when I view it in Windows Explorer. But if I use a command-line unzip, I see that source_folder.zip isn't empty; it has non-standard filenames that are part Windows and part Linux/Unix.
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 \source_folder/try.pl
6651 2018-06-01 13:02 \source_folder/url
--------- -------
7491 2 files
The mix of Windows and Unix paths is created in this line of your script:
find(\&process_file, $cwd . "\\source_folder");
You are concatenating a Unix-style path in $cwd with a Windows-style part "\source_folder".
Change the line to use a forward slash rather than a backslash to get a consistent Unix-style path:
find(\&process_file, $cwd . "/source_folder");
The second problem is this line:
zip \@files => "$zip",
    FilterName => sub { s|\Q$cwd|| },
    BinmodeIn  => 1
    or die "zip failed: $ZipError\n";
The substitution, s|\Q$cwd||, needs an extra "/", like this: s|\Q$cwd/||, to make sure that the path added to the zip archive is a relative path. So the line becomes:
zip \@files => "$zip", FilterName => sub { s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
Once those two changes are made, I can view the zip file in Explorer and get Unix-style relative paths when I use the command-line unzip:
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 source_folder/try.pl
6651 2018-06-01 13:02 source_folder/url
--------- -------
7491 2 files
This works for me:
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);
use Data::Dumper;

my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";

my $zipdir = $cwd . "/source_folder";
my $zip = "source_folder.zip";
my @files = ();

sub process_file {
    next if (($_ eq '.') || ($_ eq '..'));
    if (-d && $_ eq 'fp') {
        $File::Find::prune = 1;
        return;
    }
    push @files, $File::Find::name if -f;
}

find(\&process_file, $cwd . "/source_folder");
print Dumper \@files;

zip \@files => "$zip", FilterName => sub { s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
I changed the path separator to '/' in your call to find() and also strip it in the FilterName sub.
Console:
C:\Users\chris\Desktop\devel\experimente>mkdir source_folder
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test1.txt
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test2.txt
C:\Users\chris\Desktop\devel\experimente>perl perlzip.pl
C:/Users/chris/Desktop/devel/experimente
Exiting subroutine via next at perlzip.pl line 19.
$VAR1 = [
'C:/Users/chris/Desktop/devel/experimente/source_folder/test1.txt',
'C:/Users/chris/Desktop/devel/experimente/source_folder/test2.txt'
];
C:\Users\chris\Desktop\devel\experimente>tar -tf source_folder.zip
source_folder/test1.txt
source_folder/test2.txt

How to get all info regarding the cell, based on cell_list.txt

In this case, I have two files, cell_list.txt and allcells.txt. cell_list.txt lists the cell names that are required.
For example:
cell_abc
cell_acde
c_swer
Then I have allcells.txt, which shows the details of all cells (more than 100 of them). I found that the pattern is always the same: each cell's details start with ***** and end with 'END'. For example:
*****
Lib: lib_a
Cell: cell_abc
*****
info absw ...
info swea ...
END
*****
Lib: lib_a
Cell: cell_acdd
*****
info awee ...
info awod ...
info acwe ...
END
*****
Lib: lib_b
Cell: cell_acde
*****
info wseo ...
info poee ...
info awec ...
END
*****
Lib: lib_b
Cell: c_swer
*****
info rtoe ...
info swkt ...
END
I need to get all the details for each cell listed in cell_list.txt and somehow copy them to a new file per cell, cellname.txt. Is there any way to make this work using csh or Perl? Expected output is below.
Content of cell_abc.txt:
*****
Lib: lib_a
Cell: cell_abc
*****
info absw ...
info swea ...
END
Content of cell_acde.txt:
*****
Lib: lib_b
Cell: cell_acde
*****
info wseo ...
info poee ...
info awec ...
END
Content of c_swer.txt:
*****
Lib: lib_b
Cell: c_swer
*****
info rtoe ...
info swkt ...
END
This is roughly what I have in my script now, as I am not familiar with Perl.
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'allcells.txt';
my $list = 'cell_list.txt';
my $string;
my @matches = $file =~ m/(^\* .+? END)/g;
{
    local $/ = undef;
    open FILE, $file or die "Couldn't open file: $!";
    $string = <FILE>;
    close FILE;
    while (<>) {
        if ($string = @matches) # how to check on cell_list.txt if the cell is listed in the file or not before checking the matching string.
        {
            print $string; # how to extract and print the matching string to new file which will be named based on the cell name listed in cell_list.txt
        }
    }
}
You need to actually read in the file first, instead of trying to perform a regex match on an empty string. Iterate over the other file to populate a hash, and use hash membership to decide whether to print out a section into a new file. You can use \Q and \E inside the regex to make a literal match. The trailing /s regex flag treats the string like one long line.
#!/usr/bin/env perl
use strict;
use warnings;

my $file = 'allcells.txt';
my $list = 'cell_list.txt';

my %required_cells;
open my $fhrc, "<$list"
    or die "Unable to open '$list' : $!";
while ( my $line = <$fhrc> ) {
    chomp($line);
    $required_cells{ $line } = 1;
}

open my $fh, "<$file"
    or die "Unable to open '$file' : $!";
my $allcells_txt = do { local $/; <$fh> }; # Slurp file into a string

my @matches = $allcells_txt =~ m|\Q*****\E.+?\Q*****\E.+?END|gs;

for my $group (@matches) {
    my ($cell) = $group =~ m|Cell: (\w+)|s;
    if ( exists $required_cells{ $cell } ) {
        print "Cell [ $cell ] is required\n";
        my $out_name = "$cell.txt";
        open my $out, ">$out_name"
            or die "Unable to open '$out_name' for writing : $!";
        print $out $group . "\n";
        close $out
            or die "Unable to close '$out_name' : $!";
        print "==> Created $out_name\n";
    } else {
        print "Skipping $cell\n";
    }
}
Output:
Cell [ cell_abc ] is required
==> Created cell_abc.txt
Skipping cell_acdd
Cell [ cell_acde ] is required
==> Created cell_acde.txt
Cell [ c_swer ] is required
==> Created c_swer.txt

Use of uninitialized value $a in concatenation (.) or string

I am trying to remove the old files in a directory over SSH if the count is more than 3.
Kindly suggest how to resolve the issue.
Please refer to the code snippet:
#!/usr/bin/perl
use strict;
use warnings;

my $HOME = "/opt/app/latest";
my $LIBS = "${HOME}/libs";
my $LIBS_BACKUP_DIR = "${HOME}/libs_backups";
my $a;
my $b;
my $c;
my $d;
my $command = qq(sudo /bin/su - jenkins -c "ssh username\@server 'my $a=ls ${LIBS_BACKUP_DIR} | wc -l;my $b=`$a`;if ($b > 3); { print " Found More than 3 back up files , removing older files..";my $c=ls -tr ${LIBS_BACKUP_DIR} | head -1;my $d=`$c`;print "Old file name $d";}else { print "No of back up files are less then 3 .";} '");
print "$command\n";
system($command);
Output:
sudo /bin/su - jenkins -c "ssh username@server 'my ; =ls /opt/app/latest/libs_backups | wc -l;my ; =``;if ( > 3); { print " Found More than 3 back up files , removing older files..";my ; =ls -tr /opt/app/latest/libs_backups | head -1;my ; =``;print "Old file name ";}else { print "No of back up files are less then 3 .";} '"
Found: -c: line 0: unexpected EOF while looking for matching `''
Found: -c: line 1: syntax error: unexpected end of file
If you have three levels of escaping, you're bound to get it wrong if you do it manually. Use String::ShellQuote's shell_quote instead.
Furthermore, avoid generating code. You're bound to get it wrong! Pass the necessary information using arguments, the environment or some other channel of communication instead.
There were numerous errors in the interior Perl script on top of the fact that you tried to execute a Perl script without actually invoking perl!
#!/usr/bin/perl
use strict;
use warnings;
use String::ShellQuote qw( shell_quote );

my $HOME = "/opt/app/latest";
my $LIBS = "$HOME/libs";
my $LIBS_BACKUP_DIR = "$HOME/libs_backups";

my $perl_script = <<'__EOI__';
use strict;
use warnings;
use String::ShellQuote qw( shell_quote );

my ($LIBS_BACKUP_DIR) = @ARGV;

my $cmd = shell_quote("ls", "-tr", "--", $LIBS_BACKUP_DIR);
chomp( my @files = `$cmd` );

if (@files > 3) {
    print "Found more than 3 back up files. Removing older files...\n";
    print "$_\n" for @files;
} else {
    print "Found three or fewer backup files.\n";
}
__EOI__

my $remote_cmd = shell_quote("perl", "-e", $perl_script, "--", $LIBS_BACKUP_DIR);
my $ssh_cmd    = shell_quote("ssh", 'username@server', "--", $remote_cmd);
my $local_cmd  = shell_quote("sudo", "su", "-c", $ssh_cmd);
system($local_cmd);
I created a new file that handles the directory check and deletion logic, scp the file to the remote server, execute it there, and after completion remove the file.
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
use File::Path;
use FindBin;
use File::Copy;

my $HOME = "/opt/app/test/latest";
my $LIBS_BACKUP_DIR = "${HOME}/libs_backups";

my $a = "ls ${LIBS_BACKUP_DIR} | wc -l";
my $b = `$a`;
my $c = "ls -tr ${LIBS_BACKUP_DIR} | head -1";
my $d = `$c`;
chomp($d);

print " count : $b\n";
if ($b > 3)
{
    print " Found More than 3 back up files , removing older files..\n";
    print "Old file name $d\n";
    my $filepath = "${LIBS_BACKUP_DIR}/$d";
    rmtree $filepath;
}
else
{
    print "No of back up files are less then 3 .\n";
}
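For completeness, the copy-and-run wrapper could be as simple as this (a sketch only; the host name, remote path, and script name are assumptions, not part of the original post):
#!/usr/bin/perl
use strict;
use warnings;

my $host   = 'username@server';          # hypothetical host
my $script = 'cleanup_backups.pl';       # hypothetical name for the script shown above
my $remote = "/tmp/cleanup_backups.pl";  # hypothetical remote location

# Copy the script over, run it remotely, then delete it again.
system('scp', $script, "$host:$remote") == 0
    or die "scp failed: $?";
system('ssh', $host, "perl $remote && rm -f $remote") == 0
    or die "remote execution failed: $?";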

How can I wait until something is written to a log file in my Perl script

I am monitoring a directory for the creation of new files (.log files). These files are generated by some tool, and the tool writes log entries some time after the file is created; during this time the file is empty.
How can I wait until something is written to the log? Based on the log entries I will be invoking different scripts.
use strict;
use warnings;
use File::Monitor;
use File::Basename;

my $script1 = "~/Desktop/parser1.pl";
my $script2 = "~/Desktop/parser2.pl";
my $dir     = "~/Desktop/tool/logs";

sub textfile_notifier {
    my ($watch_name, $event, $change) = @_;
    my @new_file_paths = $change->files_created; # The change object has a property called files_created,
                                                 # which contains the names of any new files.
    for my $path (@new_file_paths) {
        my ($base, $fname, $ext) = fileparse($path, '.log'); # $ext is "" if the '.log' extension is
                                                             # not found, otherwise it's '.log'.
        if ($ext eq '.log') {
            print "$path was created\n";
            if (-z $path) {
                # I need to wait until something is written to the log
            } else {
                my @arr = `head -30 $path`;
                foreach (@arr) {
                    if (/Tool1/) {
                        system("/usr/bin/perl $script1 $path \&");
                    } elsif (/Tool1/) {
                        system("/usr/bin/perl $script2 $path \&");
                    }
                }
            }
        }
    }
}

my $monitor = File::Monitor->new();
$monitor->watch( {
    name     => $dir,
    recurse  => 1,
    callback => { files_created => \&textfile_notifier }, # event => handler
} );
$monitor->scan;
while (1) {
    $monitor->scan;
}
Basically I am grepping some of the important information from the logs.
For this formulation of your question, something like this might help you:
use File::Tail;

# for log file $logname
my @logdata;
my $file = File::Tail->new(name => $logname, maxinterval => 1);
while (defined(my $newline = $file->read)) {
    push @logdata, $newline;
    # the decision to launch the script according to data in @logdata
}
Read more here
You are monitoring just the log file creation. Maybe you could use a sleep function inside the callback sub to wait for the log file to be written. You could monitor file changes too, because some log files could be extended.
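A minimal sketch of that idea, for use inside the callback from the question (the 60-second timeout is an arbitrary assumption):
# Poll until the file has content, or give up after $timeout seconds.
sub wait_for_content {
    my ($path, $timeout) = @_;
    my $waited = 0;
    while (-z $path) {
        sleep 1;
        return 0 if ++$waited >= $timeout;   # still empty: give up
    }
    return 1;                                # the file now has data
}

# Inside textfile_notifier(), instead of the empty "if (-z $path)" branch:
# next unless wait_for_content($path, 60);   # skip this file if it stays empty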

Perl file copy duplicating output

I'm attempting to write a menu-driven, modular Perl script that will capture user input and automate the network configuration process. This script has to be able to install required Arch packages, configure AP mode, configure either DHCP or a static address for the user-selected interface, and give an option to enable bridging. (EDIT: The script also needs to be able to enable and configure the dhcpd service.)
The part I'm stuck on right now is creating a backup of the rc.conf file, reading the file, and editing the lines that need to be modified if a network interface has already been statically configured. This script is for use on Arch Linux. I did some searching around and didn't find anything that met my needs specifically.
Using generic input for
$ip = 1.1.1.1; $Bcast = 2.2.2.2; $netmask = 3.3.3.3; $GW = 4.4.4.4;
I've spent about two hours reading about file I/O and tried several things that didn't work, including scrapping the multiple-file I/O method and using something similar to while(<IS>){ s/^interface.?=(.*)$/"interface=@if[0] \n"/; } with inputs for each of the values that need to be replaced, and I couldn't get it to actually do anything.
if (system ("cat","/etc/rc.conf","|","grep","interface")){
use File::Copy "cp";
$filename = "/etc/rc.conf";
$tempfile = "/etc/rc.tmp";
$bak = "/etc/rc.bak";
cp($filename,$bak);
open(IS, $filename);
open(OS, ">$tempfile");
while(<IS>){
if($_ =~ /^interface.?=(.*)$/){ print OS"interface=#if[0] \n";}
if($_ =~ /^address.?=(.*)$/){ print OS "address=$ip\n";}
if($_ =~/^netmask.?=(.*)$/){ print OS "netmask=$netmask\n";}
if($_ =~/^broadcast.?=(.*)$/){ print OS "broadcast=$Bcast\n";}
if($_ =~/^gateway.?=(.*)$/){ print OS "gateway=$GW\n"; }
else {print OS $_;}
}
close(IS); close(OS);
unlink($filename); rename($tempfile, $filename);
}
rc.conf before
#
# /etc/rc.conf - Main Configuration for Arch Linux
. /etc/archiso/functions
LOCALE_DEFAULT="en_US.UTF-8"
DAEMON_LOCALE_DEFAULT="no"
CLOCK_DEFAULT="UTC"
TIMEZONE_DEFAULT="Canada/Pacific"
KEYMAP_DEFAULT="us"
CONSOLEFONT_DEFAULT=
CONSOLEMAP_DEFAULT=
USECOLOR_DEFAULT="yes"
LOCALE="$(kernel_cmdline locale ${LOCALE_DEFAULT})"
DAEMON_LOCALE="$(kernel_cmdline daemon_locale ${DAEMON_LOCALE_DEFAULT})"
HARDWARECLOCK="$(kernel_cmdline clock ${CLOCK_DEFAULT})"
TIMEZONE="$(kernel_cmdline timezone ${TIMEZONE_DEFAULT})"
KEYMAP="$(kernel_cmdline keymap ${KEYMAP_DEFAULT})"
CONSOLEFONT="$(kernel_cmdline consolefont ${CONSOLEFONT_DEFAULT})"
CONSOLEMAP="$(kernel_cmdline consolemap ${CONSOLEMAP_DEFAULT})"
USECOLOR="$(kernel_cmdline usecolor ${USECOLOR_DEFAULT})"
MODULES=()
UDEV_TIMEOUT=30
USEDMRAID="no"
USEBTRFS="no"
USELVM="no"
HOSTNAME="archiso"
DAEMONS=(hwclock syslog-ng)
interface=eth0
address=192.168.0.99
netmask=255.255.255.0
broadcast=192.168.0.255
gateway=192.168.0.1
rc.conf after
#
# /etc/rc.conf - Main Configuration for Arch Linux
. /etc/archiso/functions
LOCALE_DEFAULT="en_US.UTF-8"
DAEMON_LOCALE_DEFAULT="no"
CLOCK_DEFAULT="UTC"
TIMEZONE_DEFAULT="Canada/Pacific"
KEYMAP_DEFAULT="us"
CONSOLEFONT_DEFAULT=
CONSOLEMAP_DEFAULT=
USECOLOR_DEFAULT="yes"
LOCALE="$(kernel_cmdline locale ${LOCALE_DEFAULT})"
DAEMON_LOCALE="$(kernel_cmdline daemon_locale ${DAEMON_LOCALE_DEFAULT})"
HARDWARECLOCK="$(kernel_cmdline clock ${CLOCK_DEFAULT})"
TIMEZONE="$(kernel_cmdline timezone ${TIMEZONE_DEFAULT})"
KEYMAP="$(kernel_cmdline keymap ${KEYMAP_DEFAULT})"
CONSOLEFONT="$(kernel_cmdline consolefont ${CONSOLEFONT_DEFAULT})"
CONSOLEMAP="$(kernel_cmdline consolemap ${CONSOLEMAP_DEFAULT})"
USECOLOR="$(kernel_cmdline usecolor ${USECOLOR_DEFAULT})"
MODULES=()
UDEV_TIMEOUT=30
USEDMRAID="no"
USEBTRFS="no"
USELVM="no"
HOSTNAME="archiso"
DAEMONS=(hwclock syslog-ng)
interface=eth0
interface=eth0
address=1.1.1.1
address=192.168.0.99
netmask=3.3.3.3
netmask=255.255.255.0
broadcast=2.2.2.2
broadcast=192.168.0.255
gateway=4.4.4.4
I am not going to comment on the wisdom of the rest of your script, but you have:
if (system ("cat","/etc/rc.conf","|","grep","interface")){
system returns 0 on success.
So, you'll enter the block only if that system call fails.
In fact, I am on a Windows system right now with no /etc/rc.conf (but with cat and grep thanks to Cygwin). Running the following script:
#!/usr/bin/env perl
use strict; use warnings;

if (system ("cat","/etc/rc.conf","|","grep","interface")){
    print "*** it worked! ***\n";
    if ($? == -1) {
        print "failed to execute: $!\n";
    }
    elsif ($? & 127) {
        printf "child died with signal %d, %s coredump\n",
            ($? & 127), ($? & 128) ? 'with' : 'without';
    }
    else {
        printf "child exited with value %d\n", $? >> 8;
    }
}
produces the output:
cat: /etc/rc.conf: No such file or directory
cat: |: No such file or directory
cat: grep: No such file or directory
cat: interface: No such file or directory
*** it worked! ***
child exited with value 1
That means system returned a failure code. Now, if you want to use shell piping and redirection, you should pass system a string, not a list, and check like this:
if (system ('cat /etc/rc.conf | grep interface') == 0) {
On the other hand, I would rather not trust shells to propagate exit status.
The following should point you in a better direction:
#!/usr/bin/env perl
use strict;
use warnings;

my %lookup = (
    eth0 => {
        address   => '1.1.1.1',
        broadcast => '2.2.2.2',
        netmask   => '3.3.3.3',
        gateway   => '4.4.4.4',
    },
    wlan0 => {
        address   => '5.5.5.5',
        broadcast => '6.6.6.6',
        netmask   => '7.7.7.7',
        gateway   => '8.8.8.8',
    },
);

while (my $line = <DATA>) {
    if (my ($interface) = ($line =~ /^interface=(\S+)/)) {
        print $line;
        if (exists $lookup{$interface}) {
            $line = process_interface(\*DATA, $lookup{$interface});
            redo;
        }
    }
    else {
        print $line;
    }
}

sub process_interface {
    my ($fh, $lookup) = @_;
    my $keys = join '|', sort keys %$lookup;
    while (my $line = <DATA>) {
        $line =~ s/\A($keys)=.+/$1=$lookup->{$1}/
            or return $line;
        print $line;
    }
    return;
}
__DATA__
#
# /etc/rc.conf - Main Configuration for Arch Linux
. /etc/archiso/functions
# stuff
interface=eth0
address=192.168.0.99
netmask=255.255.255.0
broadcast=192.168.0.255
gateway=192.168.0.1
interface=wlan0
address=192.168.0.99
netmask=255.255.255.0
broadcast=192.168.0.255
gateway=192.168.0.1
Output:
#
# /etc/rc.conf - Main Configuration for Arch Linux
. /etc/archiso/functions
# stuff
interface=eth0
address=1.1.1.1
netmask=3.3.3.3
broadcast=2.2.2.2
gateway=4.4.4.4
interface=wlan0
address=5.5.5.5
netmask=7.7.7.7
broadcast=6.6.6.6
gateway=8.8.8.8
The problem is your if/if/if/if/if/else chain, which should be an if/elsif/elsif/elsif/elsif/else chain. The else { print OS $_ } triggers on every line that doesn't match gateway=, including the ones that match interface, address, etc.
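Applied to the loop from the question, the change looks roughly like this (a sketch; variable names are taken from the question, with the interface element written as $if[0]):
while (<IS>) {
    if    (/^interface.?=/) { print OS "interface=$if[0]\n"; }
    elsif (/^address.?=/)   { print OS "address=$ip\n"; }
    elsif (/^netmask.?=/)   { print OS "netmask=$netmask\n"; }
    elsif (/^broadcast.?=/) { print OS "broadcast=$Bcast\n"; }
    elsif (/^gateway.?=/)   { print OS "gateway=$GW\n"; }
    else                    { print OS $_; }
}
Now exactly one branch fires per line, so each setting is written once instead of being duplicated by the final else.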