How can I calculate the time difference from a file in Perl - regex

This is my first post, so please bear with me. I want to calculate the time difference from an ssh log file whose entries look like this:
Jan 10 hr:min:sec Failed password for invalid user root from "ip" port xxx ssh2
Jan 10 hr:min:sec sshd[]: User root from "ip" not allowed because none of user's groups are listed in AllowGroups
The script should alert when a user fails to log in x times within 10 minutes. Can anyone please teach me how to do this?
Thanks!!

Your specification is a little ambiguous - presumably there are going to be more than two lines in your log file - do you want the time difference between successive lines? Do you want a line parsed searching for a keyword (such as "failed login") and then the time difference to a different line similarly parsed?
Since I can't tell from what you've provided, I'm simply going to presume that there are two lines in a file, each with a date at the start, and that you want the time difference between those dates. You can then adapt the code to do what you want. Alternatively, add to your question and define exactly what a "failed login" is.
There are many ways to skin this cat, but I prefer the strptime function from DateTime::Format::Strptime, which is described as:
This module implements most of strptime(3), the POSIX function that is the reverse of strftime(3), for DateTime. While strftime takes a DateTime and a pattern and returns a string, strptime takes a string and a pattern and returns the DateTime object associated.
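A minimal round trip makes that relationship concrete. This is just an illustrative sketch; the pattern and input string are arbitrary examples:
use v5.12;
use DateTime::Format::Strptime;

# Example only: parse a syslog-style timestamp, then format it back out.
my $strp = DateTime::Format::Strptime->new(
    pattern  => '%h %d %T',
    on_error => 'croak',
);
my $dt = $strp->parse_datetime('Jan 10 14:03:18');   # string -> DateTime
say $dt->strftime('%h %d %T');                       # DateTime -> 'Jan 10 14:03:18'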
This will do as I've described above:
use v5.12;
use DateTime::Format::Strptime;

my $logfile = "ssh.log";
open(my $fh, '<', $logfile) or die "$logfile: $!";

my $strp = DateTime::Format::Strptime->new(
    pattern   => '%h %d %T ',
    time_zone => 'local',
    on_error  => 'croak',
);

my $dt1 = $strp->parse_datetime(scalar <$fh>);
my $dt2 = $strp->parse_datetime(scalar <$fh>);
my $duration = $dt2->subtract_datetime_absolute($dt1);
say "Number of seconds difference is: ", $duration->seconds;

#
# Input
# Jan 10 14:03:18 Failed password for invalid user root from "ip" port xxx ssh2
# Jan 10 14:03:22 sshd[]: User root from "ip" not allowed because none of user's groups are listed in AllowGroups
#
# Outputs:
# Number of seconds difference is: 4
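A note on the subtraction method, since it matters when reading the result: subtract_datetime returns a calendar-style DateTime::Duration split across months, days, minutes, and seconds, whereas subtract_datetime_absolute normalizes everything to seconds (plus nanoseconds), which is why the single ->seconds call above is safe. A sketch of the difference, continuing from $dt1 and $dt2 in the script above:
# subtract_datetime: units are split (months/days/minutes/seconds)
my $calendar = $dt2->subtract_datetime($dt1);
say $calendar->minutes, ' minutes and ', $calendar->seconds, ' seconds';

# subtract_datetime_absolute: everything folded into seconds
my $absolute = $dt2->subtract_datetime_absolute($dt1);
say $absolute->seconds, ' seconds in total';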
A more comprehensive answer (making even more assumptions) is below:
use v5.12;
use DateTime::Format::Strptime;
use DateTime::Duration;

my $logfile = "ssh.log";
my $maximum_failures_allowed = 3;
my $minimum_time_frame = 10 * 60;    # in seconds
my $limit = $maximum_failures_allowed - 1;

# The following is a list of rules indicating a
# failed login for the username captured in $1
my @rules = (
    qr/Failed password for user (\S+) /,
    qr/User (\S+) from [\d\.]+ not allowed/,
);

my $strp = DateTime::Format::Strptime->new(
    pattern   => '%h %d %T ',
    time_zone => 'local',
    on_error  => 'croak',
);

my %usernames;
open(my $fh, '<', $logfile) or die "$logfile: $!";
while (<$fh>) {
    for my $rx (@rules) {
        if ( /$rx/ ) {
            # rule matched -> login fail for $1. Save log line.
            my $user = $1;
            push @{ $usernames{$user} }, $_;
            # No point checking other rules...
            last;
        }
    }
}
close $fh;

for my $user (keys %usernames) {
    my @failed_logins = @{ $usernames{$user} };
    # prime the loop; we know there is at least one failed login
    my $this_line = shift @failed_logins;
    while ( @failed_logins > $limit ) {
        my $other_line = $failed_logins[ $limit ];
        my $this_time  = $strp->parse_datetime($this_line);
        my $other_time = $strp->parse_datetime($other_line);
        # this produces a DateTime::Duration object with the difference in seconds
        my $time_frame = $other_time->subtract_datetime_absolute( $this_time );
        if ($time_frame->seconds < $minimum_time_frame) {
            say "User $user had login failures at the following times:";
            print " $_" for $this_line, @failed_logins[ 0 .. $limit ];
            # (s)he may have more failures but let's not labour the point
            last;
        }
        # Here if the user's failures were spread over more than the time frame.
        # Continue moving through the array of failures, checking each window.
        $this_line = shift @failed_logins;
    }
}
exit 0;
Run on this data...
Jan 10 14:03:18 sshd[15798]: Failed password for user root from "ip" port xxx ssh2
Jan 10 14:03:22 sshd[15798]: User root from 188.124.3.41 not allowed because none of user's groups are listed in AllowGroups
Jan 10 20:31:12 sshd[15798]: Connection from 188.124.3.41 port 32889
Jan 10 20:31:14 sshd[15798]: Failed password for user root from 188.124.3.41 port 32889 ssh2
Jan 10 20:31:14 sshd[29323]: Received disconnect from 188.124.3.41: 11: Bye Bye
Jan 10 22:04:56 sshd[25438]: Connection from 200.54.84.233 port 45196
Jan 10 22:04:58 sshd[25438]: Failed password for user root from 200.54.84.233 port 45196 ssh2
Jan 10 22:04:58 sshd[30487]: Received disconnect from 200.54.84.233: 11: Bye Bye
Jan 10 22:04:59 sshd[21358]: Connection from 200.54.84.233 port 45528
Jan 10 22:05:01 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Jan 10 22:05:02 sshd[2624]: Received disconnect from 200.54.84.233: 11: Bye Bye
Jan 10 22:05:29 sshd[21358]: Connection from 200.54.84.233 port 45528
Jan 10 22:05:30 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Jan 10 22:05:33 sshd[2624]: Received disconnect from 200.54.84.233: 11: Bye Bye
Jan 10 22:06:49 sshd[21358]: Connection from 200.54.84.233 port 45528
Jan 10 22:06:51 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Jan 10 22:06:51 sshd[2624]: Received disconnect from 200.54.84.233: 11: Bye Bye
... it produces this output:
User root had login failures at the following times:
Jan 10 22:04:58 sshd[25438]: Failed password for user root from 200.54.84.233 port 45196 ssh2
Jan 10 22:05:01 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Jan 10 22:05:30 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Jan 10 22:06:51 sshd[21358]: Failed password for user root from 200.54.84.233 port 45528 ssh2
Note that the timezone and/or offset is not present in the logfile data, so there is no way for this script to work correctly on the day you enter or leave "Daylight Savings."
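If your log source is known to write timestamps in one fixed zone, a hedged workaround is to parse in that zone so no DST transition can shift the result. This assumes something about your syslog setup that the file itself cannot confirm:
# Assumption: the log source writes UTC timestamps. If so, parsing in UTC
# avoids the ambiguity around DST transitions entirely.
my $strp = DateTime::Format::Strptime->new(
    pattern   => '%h %d %T ',
    time_zone => 'UTC',
    on_error  => 'croak',
);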

Related

PAM Authentication failure for root during pexpect python

The observation below is not always the case, but after accessing the SUT several times over ssh as the root user with the correct password, the Python code gets into trouble with:
Apr 25 05:51:56 SUT sshd[31570]: pam_tally2(sshd:auth): user root (0) tally 83, deny 10
Apr 25 05:52:16 SUT sshd[31598]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.10.10.13 user=root
Apr 25 05:52:21 SUT sshd[31568]: error: PAM: Authentication failure for root from 10.10.10.13
Apr 25 05:52:21 SUT sshd[31568]: Connection closed by 10.10.10.13 [preauth]
This is the Python code:
COMMAND_PROMPT = '.*:~ #'
SSH_NEWKEY = '(?i)are you sure you want to continue connecting'

def scp(source, dest, password):
    cmd = 'scp ' + source + ' ' + dest
    try:
        child = pexpect.spawn('/bin/bash', ['-c', cmd], timeout=None)
        res = child.expect([pexpect.TIMEOUT, SSH_NEWKEY, COMMAND_PROMPT, '(?i)Password'])
        if res == 0:
            print('TIMEOUT Occurred.')
        if res == 1:
            child.sendline('yes')
            child.expect('(?i)Password')
            child.sendline(password)
            child.expect([pexpect.EOF], timeout=60)
        if res == 2:
            pass
        if res == 3:
            child.sendline(password)
            child.expect([pexpect.EOF], timeout=60)
    except:
        print('File not copied!!!')
        self.logger.error(str(self.child))
When the ssh is unsuccessful, this is the pexpect printout:
version: 2.3 ($Revision: 399 $)
command: /usr/bin/ssh
args: ['/usr/bin/ssh', 'root@100.100.100.100']
searcher: searcher_re:
0: re.compile(".*:~ #")
buffer (last 100 chars): :
Account locked due to 757 failed logins
Password:
before (last 100 chars): :
Account locked due to 757 failed logins
Password:
after: <class 'pexpect.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 2284
child_fd: 5
closed: False
timeout: 30
delimiter: <class 'pexpect.EOF'>
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0
delayafterclose: 0.1
delayafterterminate: 0.1
Any clue what it could be? Is anything missing or misconfigured for PAM authentication on my SUT? The problem is that once the SUT starts producing these PAM failures, the Python code will always have the problem, and only a reboot of the SUT seems to help :(
Manually accessing the SUT via ssh root@... always works, even when pexpect can't!!! The account seems not to be locked, according to:
SUT:~ # passwd -S root
root P 04/24/2017 -1 -1 -1 -1
I have looked into some other questions, but no real solution is mentioned that works with my Python code.
Thanks in advance.
My workaround is to modify the pam_tally configuration files for testing purposes. It seems that the SUT treats the repeated access as a threat and locks even the root account!
The fix is to remove the entry even_deny_root root_unlock_time=5 from each of the pam_tally configuration files:
/etc/pam.d/common-account:account required pam_tally2.so deny=10 onerr=fail unlock_time=600 even_deny_root root_unlock_time=5 file=/home/test/faillog
/etc/pam.d/common-auth:auth required pam_tally2.so deny=10 onerr=fail unlock_time=600 even_deny_root root_unlock_time=5 file=/home/test/faillog
Those changes take effect dynamically; no service restart is needed!
Note: after a reboot those entries will most likely be back!

Parsing and normalizing a file containing 2-3 million lines in Perl

I have a log file which contains millions (2-4 million) of lines with special information like IPs, ports, email IDs, domains, PIDs, etc.
I need to parse and normalize the file in such a way that all of the above special tokens are replaced by constant strings like IP, PORT, EMAIL, DOMAIN, etc., and I need a count of all duplicate lines.
i.e., for a file with content like below -
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.3.1 is not reachable
Aug 19 10:22:48 user 10.1.4.1 is not reachable
Aug 19 10:22:48 user 10.1.1.5 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
The normalized output will be -
MONTH DAY TIME user IP is not reachable =======> Count = 14
A log line can contain multiple tokens to be searched and replaced, such as domains and email IDs.
The code I have written below takes 16 minutes for a 10MB log file (mail server logs were used).
Is it possible to reduce that time in Perl when you have to parse that many lines, with regex and substitution operations to perform?
The code snippet I wrote is -
use strict;
use warnings;
use Tie::Hash::Sorted;
use Getopt::Long;
use Regexp::Common qw(net URI Email::Address);
use Email::Address;

my $ignore    = 0;
my $threshold = 0;
my $normalize = 0;
GetOptions(
    'ignore=s'    => \$ignore,
    'threshold=i' => \$threshold,
    'normalize=i' => \$normalize,
);

my ( %initial_log, %Logs, %final_logs );
my ( $total_lines, $threshold_value );

my $file = shift or die "Usage: $0 FILE\n";
open my $fh, '<', $file or die "Could not open '$file' $!";

# Sort the results according to frequency
my $sort_by_numeric_value = sub {
    my $hash = shift;
    [ sort { $hash->{$b} <=> $hash->{$a} } keys %$hash ];
};

# Ignore "ignore" number of fields from each line
while ( my $line = <$fh> ) {
    my $skip_words = $ignore;
    chomp $line;
    $total_lines++;
    if ($ignore) {
        my @arr = split( /[\s\t]+/smx, $line );
        while ( $skip_words-- != 0 ) { shift @arr; }
        my $n_line = join( ' ', @arr );
        $line = $n_line;
    }
    $initial_log{$line}++;
}
close $fh or die "unable to close: $!";

$threshold_value = int( ( $total_lines / 100 ) * $threshold );

tie my %sorted_init_logs, 'Tie::Hash::Sorted',
    'Hash'         => \%initial_log,
    'Sort_Routine' => $sort_by_numeric_value;

%final_logs = %sorted_init_logs;

if ($normalize) {
    # Normalize the logs
    while ( my ( $line, $count ) = ( each %final_logs ) ) {
        $line = normalize($line);
        $Logs{$line} += $count;
    }
    %final_logs = %Logs;
}

tie my %sorted_logs, 'Tie::Hash::Sorted',
    'Hash'         => \%final_logs,
    'Sort_Routine' => $sort_by_numeric_value;

my $reduced_lines = values(%final_logs);
my $reduction = int( 100 - ( ( values(%final_logs) / $total_lines ) * 100 ) );
print("Number of lines in the original logs = $total_lines");
print("Number of lines in the normalized logs = $reduced_lines");
print("Logs reduced after normalization = $reduction%\n");

# Show only the logs at or above the threshold value
while ( my ( $log, $count ) = ( each %sorted_logs ) ) {
    if ( $count >= $threshold_value ) {
        printf "%-80s ===========> [%s]\n", $log, $sorted_logs{$log};
    }
}

sub normalize {
    my $input = shift;

    # Remove unwanted characters
    $input =~ s/[()]//smxg;

    # Normalize the URIs
    $input =~ s/$RE{URI}{HTTP}/URI/smxg;

    # Normalize the IP addresses
    $input =~ s/$RE{net}{IPv4}/IP/smgx;
    $input =~ s/IP(\W+)\d+/IP$1PORT/smxg;
    $input =~ s/$RE{net}{IPv4}{hex}/HEX_IP/smxg;
    $input =~ s/$RE{net}{IPv4}{bin}/BINARY_IP/smxg;
    $input =~ s/\b$RE{net}{MAC}\b/MAC/smxg;

    # Normalize the email addresses
    $input =~ s/(\w+)=$RE{Email}{Address}/$1=EMAIL/smxg;
    $input =~ s/$RE{Email}{Address}/EMAIL/smxg;

    # Normalize the domain names
    $input =~ s/[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*(?:\.[A-Za-z]{2,})/HOSTNAME/smxg;

    return $input;
}
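One hedged micro-optimization to the normalize() sub above, independent of the answer that follows: build each Regexp::Common pattern once with qr// outside the sub, instead of interpolating $RE{...} into s/// for every line; %RE is a magic hash of overloaded objects, so each lookup and stringification costs something per call. A sketch, worth profiling before trusting:
use Regexp::Common qw(net Email::Address);

# Compile each pattern object once, outside the per-line loop.
my $ipv4_re  = qr/$RE{net}{IPv4}/;
my $mac_re   = qr/\b$RE{net}{MAC}\b/;
my $email_re = qr/$RE{Email}{Address}/;

sub normalize_fast {
    my $input = shift;
    $input =~ s/$ipv4_re/IP/g;       # cached qr// object, no tied-hash lookup
    $input =~ s/$mac_re/MAC/g;
    $input =~ s/$email_re/EMAIL/g;
    return $input;
}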
Especially if you do not know the exact types of queries you'll need to perform, you would be much better off putting the parsed log data into an SQLite database. The following example illustrates this using an in-memory database. If you want to run multiple different queries against the same data, parse once, load the data into the database, then query to your heart's content. This ought to be faster than what you are doing right now, though obviously I haven't measured anything:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite::memory:', undef, undef,
    {
        RaiseError => 1,
        AutoCommit => 0,
    }
);

$dbh->do(q{
    CREATE TABLE 'status' (
        id integer primary key,
        month char(3),
        day char(2),
        time char(8),
        agent varchar(100),
        ip char(15),
        status varchar(100)
    )
});
$dbh->commit;

my @cols = qw(month day time agent ip status);

my $inserter = $dbh->prepare(sprintf
    q{INSERT INTO 'status' (%s) VALUES (%s)},
    join(',', @cols),
    join(',', ('?') x @cols)
);

while (my $line = <DATA>) {
    $line =~ s/\s+\z//;
    $inserter->execute(split ' ', $line, scalar @cols);
}
$dbh->commit;

my $summarizer = $dbh->prepare(q{
    SELECT
        month,
        day,
        time,
        agent,
        ip,
        status,
        count(*) as count
    FROM status
    GROUP BY month, day, time, agent, ip, status
});

$summarizer->execute;
my $result = $summarizer->fetchall_arrayref;
print "@$_\n" for @$result;

$dbh->disconnect;
__DATA__
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.3.1 is not reachable
Aug 19 10:22:48 user 10.1.4.1 is not reachable
Aug 19 10:22:48 user 10.1.1.5 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.4 is not reachable
Aug 19 10:22:48 user 10.1.1.1 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Aug 19 10:22:48 user 10.1.1.6 is not reachable
Output:
Aug 19 10:22:48 user 10.1.1.1 is not reachable 4
Aug 19 10:22:48 user 10.1.1.4 is not reachable 3
Aug 19 10:22:48 user 10.1.1.5 is not reachable 1
Aug 19 10:22:48 user 10.1.1.6 is not reachable 5
Aug 19 10:22:48 user 10.1.3.1 is not reachable 1
Aug 19 10:22:48 user 10.1.4.1 is not reachable 1
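If you also want the normalization from the question (so that, for example, every IP collapses into one row), a hedged extension of the same idea is to rewrite the ip field before inserting and let GROUP BY do the counting. This replaces the insert loop above; the pattern is a deliberately crude IPv4 check, just for illustration:
# Sketch: collapse the ip column to the literal string 'IP' before insert,
# so the GROUP BY merges all hosts into a single normalized row.
while (my $line = <DATA>) {
    $line =~ s/\s+\z//;
    my @fields = split ' ', $line, scalar @cols;
    $fields[4] = 'IP' if $fields[4] =~ /^\d{1,3}(?:\.\d{1,3}){3}$/;
    $inserter->execute(@fields);
}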

TCL_REGEXP:: How to grep 5 different words from a variable using Tcl regexp, and how to send the grepped output to columns of an Excel sheet?

My TCL script:
set line {
Jul 24 21:06:40 2014: %AUTH-6-INFO: login[1765]: user 'admin' on 'pts/1' logged
Jul 24 21:05:15 2014: %DATAPLANE-5-: Unrecognized HTTP URL www.58.net. Flow: 0x2
Jul 24 21:04:39 2014: %DATAPLANE-5-: Unrecognized HTTP URL static.58.com. Flow:
Jul 24 21:04:38 2014: %DATAPLANE-5-: Unrecognized HTTP URL www.google-analytics.
com. Flow: 0x2265394048.
Jul 24 21:04:36 2014: %DATAPLANE-5-: Unrecognized HTTP URL track.58.co.in. Flow: 0
Jul 24 21:04:38 2014: %DATAPLANE-5-:Unrecognized HTTP URL www.google.co.in. Flow: 0x87078800
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Client Hello ServerName www.google.co.in. Flow: 0x87073880. len_analyzed: 183
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Server Hello ServerName test1. Flow: 0x87073880, len_analyzed 99
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Server Cert CommonName *.google.com. Flow: 0x87073880
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Searching rname(TYPE_A) cs50.wac.edgecastcdn.net in dns_hash_table
Jul 24 21:04:38 2014: %DATAPLANE-5-:Unrecognized HTTP URL www.facebook.com. Flow: 0x87078800
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Client Hello ServerName www.fb.com. Flow: 0x87073880. len_analyzed: 183
Jul 24 21:05:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Server Hello ServerName test. Flow: 0x87073880, len_analyzed 99
Jul 24 21:04:38 2014: %DATAPLANE-5-:CCB:44:Unrecognized Server Cert CommonName *.facebook.com. Flow: 0x87073880
Jul 24 21:05:39 2014: %DATAPLANE-5-:CCB:44:Searching rname(TYPE_A) cs50.wac.facebook.net in dns_hash_table
}
set urls [list]
foreach item [regexp -all -inline {URL\s+\S+} $line] {
    lappend urls [lindex $item 1]
}
#puts $res
set s "*****************************************************"
set f {}
set f [open output.txt a]
if {$f ne {}} {
    foreach url $urls {
        chan puts $f $url
    }
    chan puts $f $s
    chan close $f
}
My Requirement:
REQ 1. I need to grep the following things from the $line variable:
URL www.58.net
Client Hello ServerName www.google.co.in.
Server Hello ServerName test1
Server Cert CommonName *.google.com.
rname(TYPE_A) cs50.wac.edgecastcdn.net
URL, Client Hello ServerName, Server Hello ServerName, Server Cert CommonName, and rname are the common fields. I need to grep whatever word appears after each of them, as shown above.
REQ 2. When I browse a URL, I get the contents of $line. When I open a URL, my script should automatically grep the above things and store them in an MS Excel file. There should be 5 columns in the Excel sheet, one for each field. When "URL" is found in $line, it should go into column 1 of the Excel sheet. When "Client Hello ServerName" is found, it should go into column 2. In this way I want to upload all 5 fields to the Excel sheet.
Using the script provided above, I am able to grep URLs and upload them to a .txt file.
Please guide me with your ideas. Thanks a lot in advance.
Thanks,
Balu P.
Like most RE engines, Tcl's allows alternation through the use of the | operator. This lets you do:
# This is using expanded syntax
foreach {whole type payload} [regexp -all -inline {(?x)
    \y ( URL
       | (?: Client | Server)[ ]Hello[ ]ServerName
       | Server[ ]Cert[ ]CommonName
       | rname\(TYPE_A\) )
    \s+ (\S+)
} $line] {
    puts "type = $type"
    puts "payload = [string trimright $payload .]"
}
(The tricky bits: \y means “word boundary”, and real spaces have to be written as [ ] because expanded mode would otherwise swallow the whitespace.)
When I try with your data, I get this output (two output lines per matched input line):
type = URL
payload = www.58.net
type = URL
payload = static.58.com
type = URL
payload = www.google-analytics
type = URL
payload = track.58.co.in
type = URL
payload = www.google.co.in
type = Client Hello ServerName
payload = www.google.co.in
type = Server Hello ServerName
payload = test1
type = Server Cert CommonName
payload = *.google.com
type = rname(TYPE_A)
payload = cs50.wac.edgecastcdn.net
type = URL
payload = www.facebook.com
type = Client Hello ServerName
payload = www.fb.com
type = Server Hello ServerName
payload = test
type = Server Cert CommonName
payload = *.facebook.com
type = rname(TYPE_A)
payload = cs50.wac.facebook.net
I don't know if this is exactly what you want, but it's very close.
For the second question, you need to either generate a CSV file (Tcl's got a package for that in the community library, tcllib) or to use COM to talk to Excel and manipulate things in there directly (the Tcom package is the generally recommended approach there). Which is best will depend on factors that you are not telling us; you should ask that as a separate question while explaining what the situation is (e.g., is there an existing spreadsheet or will the spreadsheet be created de novo.)
You can write a procedure and pass the type as the filename and the payload as the data to be written. I have written one below.
proc type2file {filename payload} {
    set name [string trim $filename].txt
    set fp [open $name a]
    puts $fp $payload
    close $fp
}
Call this inside the foreach loop. Please let me know if it works for you.

Perl Regex issues

Why isn't this Perl regex working? I'm grabbing the date and username (the date works fine), and it grabs all the usernames correctly until it hits bob.thomas, where it grabs the entire line.
Code:
m/^(.+)\s-\sUser\s(.+)\s/;
print "$2------\n";
Sample Data:
Feb 17, 2013 12:18:02 AM - User plasma has logged on to client from host
Feb 17, 2013 12:13:00 AM - User technician has logged on to client from host
Feb 17, 2013 12:09:53 AM - User john.doe has logged on to client from host
Feb 17, 2013 12:07:28 AM - User terry has logged on to client from host
Feb 17, 2013 12:04:10 AM - User bob.thomas has been logged off from host because its web server session timed out. This means the web server has not received a request from the client in 3 minute(s). Possible causes: the client process was killed, the client process is hung, or a network problem is preventing access to the web server.
For the user that asked, here is the full code:
open (FILE, "log") or die print "couldn't open file";
$record=0;
$first=1;
while (<FILE>)
{
    if(m/(.+)\sto now/ && $first==1) # find the area to start recording
    {
        $record=1;
        $first=0;
    }
    if($record==1)
    {
        m/^(.+)\s-\sUser\s(.+)\s/;
        <STDIN>;
        print "$2------\n";
        if(!exists $user{$2})
        {
            $users{$2}=$1;
        }
    }
}
.+ is greedy: it matches the longest possible string. If you want it to match the shortest, use .+? instead:
/^(.+)\s-\sUser\s(.+?)\s/;
Or use a regexp that doesn't match whitespace:
/^(.+)\s-\sUser\s(\S+)/;
Use the reluctant/non-greedy quantifier to match up to the first occurrence rather than the last. You should do this in both places, just in case the line also contains " - User " somewhere later:
m/^(.+?)\s-\sUser\s(.+?)\s/;
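A self-contained sketch showing the two behaviours side by side on the problem line (truncated here for brevity):
use strict;
use warnings;

my $line = 'Feb 17, 2013 12:04:10 AM - User bob.thomas has been logged off from host';

# Greedy: the second (.+) grabs as much as possible, backing off only to the
# last whitespace, so $2 swallows most of the line.
print "greedy:     $2\n" if $line =~ m/^(.+)\s-\sUser\s(.+)\s/;

# Non-greedy: (.+?) stops at the first whitespace after the username.
print "non-greedy: $2\n" if $line =~ m/^(.+)\s-\sUser\s(.+?)\s/;

This prints "bob.thomas has been logged off from" for the greedy version and just "bob.thomas" for the non-greedy one.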

unexpected EOF while looking for matching `' '

#!/usr/bin/perl
use warnings;

while(1){
    system ( "dialog --menu Customize 30 80 60 "
        . "'Show rules' 'Show all the current rules' "
        . "'Flush rules' 'Flush all the tables' "
        . "Allow IP' 'Block all except one IP' "
        . "'Block IP' 'Block all the packets from an IP' "
        . "'Block MAC' 'Block using the hardware address' "
        . "'Private networking' 'Allow only one network and block other networks' "
        . "'Allow lo' 'Allow local network interface' "
        . "'Save' 'Save customized rules' "
        . "'Exit' 'Close the program' "
        . "'more options' '........' 2> /tmp/customize.txt");

    open FILE4, "/tmp/customize.txt" or die $!;
    chomp(my $customize = <FILE4>);

    # SHOW RULES
    if($customize =~ /Show rules/){
        `iptables -nvL | tee /tmp/nvl.txt`;
        system ("dialog --textbox /tmp/nvl.txt 22 70");
    }
    # FLUSH RULES
    elsif($customize =~ /Flush rules/){
        `iptables -F`;
        system ("dialog --infobox 'All tables have been flushed.' 05 35");
        sleep 2;
    }
    # ALLOW IP
    elsif($customize =~ /Allow IP/){
        system ("dialog --inputbox 'Enter the IP address of the system which you want to allow:' 15 40 2> /tmp/allowIP.txt");
        open FILE7, "/tmp/allowIP.txt" or die $!;
        chomp(my $aip = <FILE7>);
        `iptables -I INPUT -s $aip -j DROP`;
        system ("dialog --infobox 'IP address $aip is allowed and rest are blocked' 05 45");
        sleep 2;
    }
    # BLOCK IP
    elsif($customize =~ /Block IP/){
        system ("dialog --inputbox 'Enter the IP address of the system which you want to block:' 15 40 2> /tmp/blockIP.txt");
        open FILE5, "/tmp/blockIP.txt" or die $!;
        chomp(my $ip = <FILE5>);
        `iptables -A INPUT -s $ip -j DROP`;
        system ("dialog --infobox 'IP address $ip has been blocked!' 05 35");
        sleep 2;
    }
    # PRIVATE NETWORK
    elsif($customize =~ /Private networking/){
        system ("dialog --inputbox 'Enter the network address which you want to allow (eg. 192.168.0.0/24)' 15 40 2> /tmp/network.txt");
        open FILE6, "/tmp/network.txt" or die $!;
        chomp(my $network = <FILE6>);
        `iptables -I INPUT -s $network -j ACCEPT`;
        system ("dialog --infobox 'Network $network is allowed and rest networks are blocked' 05 35");
        sleep 2;
    }
    # ALLOW LO
    elsif($customize =~ /Allow lo/){
        `iptables -I INPUT -i lo -j ACCEPT`;
        system ("dialog --infobox 'Local interface is allowed.' 05 35");
        sleep 2;
    }
    # SAVE
    elsif($customize =~ /Save/){
        `service iptables save`;
        system ("dialog --infobox 'All rules have been saved successfully' 05 45");
        sleep 2;
    }
    # EXIT
    elsif($customize =~ /Exit/){
        system ("dialog --infobox 'Closing application.' 05 35");
        sleep 2;
        exit 0;
    }
    else{
        exit;
    }
}
perl file.plx
error:
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
How do I resolve this error?
Missing ' here: "Allow IP'
You forgot a ' in
. "Allow IP' 'Block all except one IP' "
before Allow IP' in line 7 of your Perl code.
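For reference, the corrected line reads:
. "'Allow IP' 'Block all except one IP' "
More generally, a single long string handed to system() goes through sh -c, so any unbalanced quote becomes a shell syntax error like the one above. Passing system() a list of arguments avoids shell quoting entirely, though the 2> /tmp/customize.txt redirection would then have to be handled another way (for example in Perl).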