Telnet SMTP with expect or shell script - regex

I want to build up an authenticated SMTP connection with an expect script. Just as a test I wanted to get the EHLO parameters, but expect is not working like this:
#!/usr/bin/expect
set timeout -1
set smtp [lindex $argv 0]
set port [lindex $argv 1]
spawn telnet $smtp $port
expect "[2]{2,}[0]{1,}"
send "ehlo\n"
I expect the code 220 to come from the mail server so I can continue and send ehlo, just like this:
..../...:telnet smtp.mail.yahoo.de 25
Trying 77.238.184.85...
Connected to smtp2-de.mail.vip.ukl.yahoo.com.
Escape character is '^]'.
220 smtp116.mail.ukl.yahoo.com ESMTP
ehlo
250-smtp116.mail.ukl.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250-SIZE 41697280
250 8BITMIME
Instead, it fails with an error saying:
spawn telnet smtp.mail.yahoo.de 25
invalid command name "2"
while executing
"2"
invoked from within
"expect "[2]{2,}[0]{1,}""
(file "./login.exp" line 6)
If I just write expect "220" instead of expect "[2]{2,}[0]{1,}", it works but ignores send "ehlo\n".

As advised above, I used exp_internal 1 to get a sense of what expect really listens for.
I can also recommend autoexpect: the script it generated was not perfect, but after improving some of the code it was a real help, and in the end it worked.
#!/usr/bin/expect
#exp_internal 1
set timeout -1
set smtp [lindex $argv 0]
set port [lindex $argv 1]
spawn telnet $smtp $port
expect -re {[2]{2,}[0]{1,}}
sleep 3;
send -- "Ehlo\r"
expect -re {[2]{1,}[5]{1,}[0]{1,}}
send -- "quit\r"
expect eof
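For reference, the script above takes the server and port as its two arguments, so it is invoked like:
./login.exp smtp.mail.yahoo.de 25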

You need to send a newline after sending "ehlo":
send "ehlo\n"
EDIT: Based on your latest edit, you also have to escape the leading bracket in your regex to prevent Tcl from trying to interpret it as a command:
expect "\[2]{2,}\[0]{1,}"
EDIT: Also, your expect line isn't actually matching what you think it is. At this point, I'd suggest running through one of the many tutorials on expect, or simply using autoexpect to generate your script.
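As a minimal sketch of what a corrected script could look like (host, port and the EHLO argument are placeholders; the -re form is used here, which avoids the bracket-escaping issue entirely):
#!/usr/bin/expect
set timeout 10
spawn telnet smtp.example.com 25     ;# placeholder host and port
expect -re {220[^\r\n]*}             ;# wait for the server greeting
send "ehlo example.com\r"            ;# \r terminates the line for the remote side
expect -re {250[- ]}                 ;# first line of the EHLO response
send "quit\r"
expect eof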

Test::More failing test with equal strings

I'm using Perl 5.16 with the latest Test::More module.
I have a unit test that is called like this:
is($ret, "ssh: connect to host 1.2.3.187 port 22: Network is unreachable\nCouldn't read packet: Connection reset by peer\n", "putFileWithSFTP bad server ret test");
Running the test fails with the following result:
# Failed test 'putFileWithSFTP bad server ret test'
# at t/Backup.t line 891.
# got: 'ssh: connect to host 1.2.3.187 port 22: Network is unreachable
# Couldn't read packet: Connection reset by peer
# '
# expected: 'ssh: connect to host 1.2.3.187 port 22: Network is unreachable
# Couldn't read packet: Connection reset by peer
# '
The strings should be equal and they also look like they are equal. What could be the cause of this?
I figured it out:
When comparing the two strings with a hexdump, I noticed that, for whatever reason, the ssh output in $ret contains Windows (CRLF) line endings, which I didn't include in my comparison.
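If the CRLF endings are legitimate output from the tool, one way to make the test robust is to normalize them before the comparison. A minimal sketch, assuming $ret holds the captured output:
use strict;
use warnings;
use Test::More tests => 1;

# Stand-in for the captured ssh output, which arrives with CRLF ("\r\n") endings.
my $ret = "ssh: connect to host 1.2.3.187 port 22: Network is unreachable\r\n"
        . "Couldn't read packet: Connection reset by peer\r\n";

# Normalize Windows line endings to plain "\n" before comparing.
(my $normalized = $ret) =~ s/\r\n/\n/g;

is($normalized,
   "ssh: connect to host 1.2.3.187 port 22: Network is unreachable\n"
 . "Couldn't read packet: Connection reset by peer\n",
   "putFileWithSFTP bad server ret test");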

How to execute a command on multiple servers

I have a set of 150 servers to log in to and a command (to get disk space). How can I execute this command on each server?
If the script takes 1 minute to get the command's report for a single server, how can I send a report for all the servers every 10 minutes?
use strict;
use warnings;
use Net::SSH::Perl;
use Filesys::DiskSpace;

# I have more than 100 servers...
my %hosts = (
    'localhost' => {
        user     => "z",
        password => "qumquat",
    },
    '129.221.63.205' => {
        user     => "z",
        password => "aardvark",
    },
);

# file system /home or /dev/sda5
my $dir = "/home";
my $cmd = "df $dir";

foreach my $host (keys %hosts) {
    my $ssh = Net::SSH::Perl->new($host, port => 22, debug => 1, protocol => '2,1');
    $ssh->login($hosts{$host}{user}, $hosts{$host}{password});
    my ($out) = $ssh->cmd($cmd);
    print "$out\n";
}
It has to print the disk-space output for each server.
Is there a reason this needs to be done in Perl? There is an existing tool, dsh, which provides precisely this functionality of using ssh to run a shell command on multiple hosts and report the output from each. It also has the ability, with the -c (concurrent) switch to run the command at the same time on all hosts rather than waiting for each one to complete before going on to the next, which you would need if you want to monitor 150 machines every 10 minutes, but it takes 1 minute to check each host.
To use dsh, first create a file in ~/.dsh/group/ containing a list of your servers. I'll put mine in ~/.dsh/group/test-group with the content:
galera-1
galera-2
galera-3
Then I can run the command
dsh -g test-group -c 'df -h /'
And get back the result:
galera-3: Filesystem Size Used Avail Use% Mounted on
galera-3: /dev/mapper/debian-system 140G 36G 99G 27% /
galera-1: Filesystem Size Used Avail Use% Mounted on
galera-1: /dev/mapper/debian-system 140G 29G 106G 22% /
galera-2: Filesystem Size Used Avail Use% Mounted on
galera-2: /dev/mapper/debian-system 140G 26G 109G 20% /
(They're out-of-order because I used -c, so the command was sent to all three servers at once and the results were printed in the order the responses were received. Without -c, they would appear in the same order the servers are listed in the group file, but then it would wait for each response before connecting to the next server.)
But, really, with the talk of repeating this check every 10 minutes, it sounds like what you really want is a proper monitoring system such as Icinga (a high-performance fork of the better-known Nagios), rather than just a way to run commands remotely on multiple machines (which is what dsh provides). Unfortunately, configuring an Icinga monitoring system is too involved for me to provide an example here, but I can tell you that monitoring disk space is one of the checks that are included and enabled by default when using it.
There is a ready-made tool called Ansible for exactly this purpose. With it you can define your list of servers, group them, and execute commands on all of them.
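As a rough sketch of that approach (the inventory file name and host names here are placeholders):
# inventory.ini
[diskcheck]
server1.example.com
server2.example.com

# Run df on every host in the group, in parallel:
ansible diskcheck -i inventory.ini -m command -a "df -h /home"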

GDB Server and GDB Client: who sends the "+" first?

GDB Client:
NetworkClientConnect 503: Attempting host: 10.23.37.155 (addr: 02CE4B50)
NetworkClientConnect 518: Connected to host: 10.23.37.155
NetworkClientRecv 576: Recv Packet: +
NetworkClientSend 550: Sent Packet: +
GDB Server:
Debug: 243 275 pld.c:207 handle_pld_init_command(): Initializing PLDs...
Info : 244 22937 server.c:83 add_connection(): accepting 'gdb' connection from 3333
Debug: 247 22954 gdb_server.c:260 gdb_get_char_inner(): received '+'
Debug: 248 22954 gdb_server.c:272 gdb_get_char_inner(): returned char '+' (0x2b)
Initially the connection is made, then each side acknowledges that it got a packet by sending "+". In my case the client says it is receiving a '+', and so does the server, as the very first info exchange. That does not make sense: one has to send and the other has to receive, yet what I see is both receiving and sending in parallel. But it is working. So where is my thinking wrong? Also, if you can point me to a URL which shows exactly the GDB server and client protocol exchange, that would be awesome.
In your GDB client printout, it looks to me like the messages are not printed in order (note that the Recv packet has line number 576 and the Sent packet 550).
Use Wireshark or a similar tool to debug an issue like this.
I tried connecting to gdbserver via loopback and according to wireshark the dialogue looks like this:
client sends "+"
client sends "$qSupported:multiprocess+;xmlRegisters=i386;qRelocInsn+#b5"
server sends "+"
server sends "$PacketSize=3fff;QPassSignals+;..."
and so on.
GDB does have an option, selectable at runtime, that can help debug such things. Start it, then issue "set debug remote 1". The same is possible on the remote side: start gdbserver with "gdbserver --remote-debug ...". This will print the remote GDB protocol dialogue on both sides.
Another option, possibly the best if also the most time-consuming, is to check the gdb and gdbserver source.
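As a small sketch of both invocations (the target binary and port are placeholders):
# On the target side:
gdbserver --remote-debug :2345 ./myprog

# On the host side:
gdb ./myprog
(gdb) set debug remote 1
(gdb) target remote 10.23.37.155:2345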
I went to the Wireshark help forum (http://ask.wireshark.org/) and posed the question there: "How to capture packets between 2 IPs". A person called Quadratic gave a brilliant answer there; you can refer to the Wireshark site, or here it is. It works like a charm!
Do this:
• When you first start Wireshark, click on the button in the far upper-left that says "List the available capture interfaces" when you hover over it.
• In the new "Capture Interfaces" window that opens, select the interface you want to capture packets on (with the check box on the left-hand side) and click "Options".
• In the Capture Options window, in the lower-left corner there should be a "Stop Capture Automatically After..." section. Check the "packets" option and put in a value of 50.
• In the same Capture Options window, in the text box to the right of "Capture Filter", type the statement (without quotes) "ip host 10.xx.xx.xx and ip host 10.yy.yy.yy".
• Hit the Start button :)
One small thing to note - if the interface you're capturing on is doing VLAN tagging, replace the capture filter statement with "vlan and ip host 10.xx.xx.xx and ip host 10.yy.yy.yy" (without quotes).
Edit:
An even simpler solution is to just use one command line statement:
C:\Program Files\Wireshark\dumpcap.exe -c 50 -i {interface name or number} -w {wherever you want to save the packet capture file}

Expect take first prompt after spawn

I want to make a script that takes, as the expect pattern, the first prompt after spawning "ssh user@server".
An example:
I have 5 servers, but the initial prompt differs on all of them and I can't modify it:
[root@test home]#
root@server2:~$
User: root Server: server3 ~ !
You get the point.
This is how I think it should work, but I can't figure out how to get the prompt:
set timeout -1
spawn ssh root@server
expect "assword:"
send "password\n"
#
set var [getprompt]   ;# pseudo-code: how do I capture the prompt here?
#
expect "$var"
send "stuff\n"
expect eof
How can I get those prompts into an expect script so that it can recognize the prompt to follow?
I would just keep an array of regular expressions:
array set prompt_re {
    test    {#\s*$}
    server2 {\$\s*$}
    server3 {!\s*$}
}
spawn ssh $user@$host
expect assword:
send "$password\r"
expect -re $prompt_re($host)
Or you could mash those up into a single regex:
expect -re {[#$!]\s*$} ;# expect the prompt.
Try this (pseudo code):
echo commands-to-be-executed-on-ssh | expect-script
And your expect script would look something like:
set timeout -1
spawn ssh root@server
expect "assword:"
send "password\n"
interact   ;# At this point, expect passes the redirected/piped stdin to the ssh process.
Note: I haven't tested this. So apologies for any syntax errors :)
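For instance, an invocation of that idea might look like this (the script name and the command are just placeholders):
echo "df -h /" | ./login-and-run.exp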

boost::asio::async_read_until - Does not read until four messages have been sent

I'm trying to learn boost::asio by writing a simple client which sends strings to an echo server. I have tested the echo server with telnet and it works great, but my boost::asio client is acting weird: async_read_until doesn't seem to read/call the handler until four messages have been sent (and returned by the echo server). The output of the client may explain this better (I removed the newline after each value):
gurka#x:~/private/code/test$ ./test localhost 2001
Hostname resolved.
Connected to server.
Starting write
Starting read_until
Writting[1]
Writting[2]
Writting[3]
Writting[4]
Read[1]
Writting[5]
Read[2]
Writting[6]
When the connection has been made, I make two calls:
boost::asio::async_write(mSocket, mOutgoingBuffer, boost::bind(&Connection::writeToServer, this, boost::asio::placeholders::error));
boost::asio::async_read_until(mSocket, mIncomingBuffer, "\n", boost::bind(&Connection::readFromServer, this, boost::asio::placeholders::error));
writeToServer and readFromServer just print Writting/Read and the value written/read, and then issue the async_write/async_read_until call again with exactly the same parameters. writeToServer takes the messages to send from a queue which I have filled with "1\n".."6\n".
I don't think the error is in the echo server, since I can see that it reads and writes back all 6 values in order. And, as I said before, it works perfectly using telnet. So why is async_read_until "delayed" by 4 messages? I've tried sending longer strings and it's the same thing.