How do I set the column width of a pexpect ssh session? - python-2.7

I am writing a simple Python script that connects to a SAN via SSH, runs a set of commands, and then exits. Ultimately each command's output will be logged to a separate log file along with a timestamp. This is because the device we are connecting to doesn't support certificate-based SSH connections and doesn't have decent logging capabilities on its current firmware revision.
The issue I seem to be running into is that the SSH session that gets created is limited to 78 characters wide, while the results generated by each command are significantly wider - 155 characters. This causes a bunch of funkiness.
First, the results in their current state are much more difficult to parse. Second, because the buffer is so much narrower, the final volume command won't execute properly: the pexpect-launched SSH session actually gets prompted to "press any key to continue".
How do I change the column width of the pexpect session?
Here is the current code (it works but is incomplete):
#!/usr/bin/python
import pexpect
import os
PASS='mypassword'
HOST='1.2.3.4'
LOGIN_COMMAND='ssh manage@'+HOST
CTL_COMMAND='show controller-statistics'
VDISK_COMMAND='show vdisk-statistics'
VOL_COMMAND='show volume-statistics'
VDISK_LOG='vdisk.log'
VOLUME_LOG='volume.log'
CONTROLLER_LOG='controller.log'
DATE=os.system('date +%Y%m%d%H%M%S')
child=pexpect.spawn(LOGIN_COMMAND)
child.setecho(True)
child.logfile = open('FetchSan.log','w+')
child.expect('Password: ')
child.sendline(PASS)
child.expect('# ')
child.sendline(CTL_COMMAND)
print child.before
child.expect('# ')
child.sendline(VDISK_COMMAND)
print child.before
child.expect('# ')
print "Sending "+VOL_COMMAND
child.sendline(VOL_COMMAND)
print child.before
child.expect('# ')
child.sendline('exit')
child.expect(pexpect.EOF)
print child.before
The expected output:
# show controller-statistics
Durable ID CPU Load Power On Time (Secs) Bytes per second IOPS Number of Reads Number of Writes Data Read Data Written
---------------------------------------------------------------------------------------------------------------------------------------------------------
controller_A 0 45963169 1573.3KB 67 386769785 514179976 6687.8GB 5750.6GB
controller_B 20 45963088 4627.4KB 421 3208370173 587661282 63.9TB 5211.2GB
---------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully.
# show vdisk-statistics
Name Serial Number Bytes per second IOPS Number of Reads Number of Writes Data Read Data Written
------------------------------------------------------------------------------------------------------------------------------------------------
CRS 00c0ff13349e000006d5c44f00000000 0B 0 45861 26756 3233.0MB 106.2MB
DATA 00c0ff1311f300006dd7c44f00000000 2282.4KB 164 23229435 76509765 5506.7GB 1605.3GB
DATA1 00c0ff1311f3000087d8c44f00000000 2286.5KB 167 23490851 78314374 5519.0GB 1603.8GB
DATA2 00c0ff1311f30000c2f8ce5700000000 0B 0 26 4 1446.9KB 65.5KB
FRA 00c0ff13349e000001d8c44f00000000 654.8KB 5 3049980 15317236 1187.3GB 1942.1GB
FRA1 00c0ff13349e000007d9c44f00000000 778.7KB 6 3016569 15234734 1179.3GB 1940.4GB
------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully.
# show volume-statistics
Name Serial Number Bytes per second IOPS Number of Reads Number of Writes Data Read Data Written
-----------------------------------------------------------------------------------------------------------------------------------------------------
CRS_v001 00c0ff13349e0000fdd6c44f01000000 14.8KB 5 239611146 107147564 1321.1GB 110.5GB
DATA1_v001 00c0ff1311f30000d0d8c44f01000000 2402.8KB 218 1701488316 336678620 33.9TB 3184.6GB
DATA2_v001 00c0ff1311f3000040f9ce5701000000 0B 0 921 15 2273.7KB 2114.0KB
DATA_v001 00c0ff1311f30000bdd7c44f01000000 2303.4KB 209 1506883611 250984824 30.0TB 2026.6GB
FRA1_v001 00c0ff13349e00001ed9c44f01000000 709.1KB 28 25123082 161710495 1891.0GB 2230.0GB
FRA_v001 00c0ff13349e00001fd8c44f01000000 793.0KB 34 122052720 245322281 3475.7GB 3410.0GB
-----------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully.
The output as printed to the terminal (as mentioned, the 3rd command won't execute in its current state):
show controller-statistics
Durable ID CPU Load Power On Time (Secs) Bytes per second
IOPS Number of Reads Number of Writes Data Read
Data Written
----------------------------------------------------------------------
controller_A 3 45962495 3803.1KB
73 386765821 514137947 6687.8GB
5748.9GB
controller_B 20 45962413 5000.7KB
415 3208317860 587434274 63.9TB
5208.8GB
----------------------------------------------------------------------
Success: Command completed successfully.
Sending show volume-statistics
show vdisk-statistics
Name Serial Number Bytes per second IOPS
Number of Reads Number of Writes Data Read Data Written
----------------------------------------------------------------------------
CRS 00c0ff13349e000006d5c44f00000000 0B 0
45861 26756 3233.0MB 106.2MB
DATA 00c0ff1311f300006dd7c44f00000000 2187.2KB 152
23220764 76411017 5506.3GB 1604.1GB
DATA1 00c0ff1311f3000087d8c44f00000000 2295.2KB 154
23481442 78215540 5518.5GB 1602.6GB
DATA2 00c0ff1311f30000c2f8ce5700000000 0B 0
26 4 1446.9KB 65.5KB
FRA 00c0ff13349e000001d8c44f00000000 1829.3KB 14
3049951 15310681 1187.3GB 1941.2GB
FRA1 00c0ff13349e000007d9c44f00000000 1872.8KB 14
3016521 15228157 1179.3GB 1939.5GB
----------------------------------------------------------------------------
Success: Command completed successfully.
Traceback (most recent call last):
File "./fetchSAN.py", line 34, in <module>
child.expect('# ')
File "/Library/Python/2.7/site-packages/pexpect-4.2.1-py2.7.egg/pexpect/spawnbase.py", line 321, in expect
timeout, searchwindowsize, async)
File "/Library/Python/2.7/site-packages/pexpect-4.2.1-py2.7.egg/pexpect/spawnbase.py", line 345, in expect_list
return exp.expect_loop(timeout)
File "/Library/Python/2.7/site-packages/pexpect-4.2.1-py2.7.egg/pexpect/expect.py", line 107, in expect_loop
return self.timeout(e)
File "/Library/Python/2.7/site-packages/pexpect-4.2.1-py2.7.egg/pexpect/expect.py", line 70, in timeout
raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: Timeout exceeded.
<pexpect.pty_spawn.spawn object at 0x105333910>
command: /usr/bin/ssh
args: ['/usr/bin/ssh', 'manage@10.254.27.49']
buffer (last 100 chars): '-------------------------------------------------------------\r\nPress any key to continue (Q to quit)'
before (last 100 chars): '-------------------------------------------------------------\r\nPress any key to continue (Q to quit)'
after: <class 'pexpect.exceptions.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 19519
child_fd: 5
closed: False
timeout: 30
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: <open file 'FetchSan.log', mode 'w+' at 0x1053321e0>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
0: re.compile("# ")
And here is what is captured in the log:
Password: mypassword
HP StorageWorks MSA Storage P2000 G3 FC
System Name: Uninitialized Name
System Location:Uninitialized Location
Version:TS230P008
# show controller-statistics
show controller-statistics
Durable ID CPU Load Power On Time (Secs) Bytes per second
IOPS Number of Reads Number of Writes Data Read
Data Written
----------------------------------------------------------------------
controller_A 3 45962495 3803.1KB
73 386765821 514137947 6687.8GB
5748.9GB
controller_B 20 45962413 5000.7KB
415 3208317860 587434274 63.9TB
5208.8GB
----------------------------------------------------------------------
Success: Command completed successfully.
# show vdisk-statistics
show vdisk-statistics
Name Serial Number Bytes per second IOPS
Number of Reads Number of Writes Data Read Data Written
----------------------------------------------------------------------------
CRS 00c0ff13349e000006d5c44f00000000 0B 0
45861 26756 3233.0MB 106.2MB
DATA 00c0ff1311f300006dd7c44f00000000 2187.2KB 152
23220764 76411017 5506.3GB 1604.1GB
DATA1 00c0ff1311f3000087d8c44f00000000 2295.2KB 154
23481442 78215540 5518.5GB 1602.6GB
DATA2 00c0ff1311f30000c2f8ce5700000000 0B 0
26 4 1446.9KB 65.5KB
FRA 00c0ff13349e000001d8c44f00000000 1829.3KB 14
3049951 15310681 1187.3GB 1941.2GB
FRA1 00c0ff13349e000007d9c44f00000000 1872.8KB 14
3016521 15228157 1179.3GB 1939.5GB
----------------------------------------------------------------------------
Success: Command completed successfully.
# show volume-statistics
show volume-statistics
Name Serial Number Bytes per second
IOPS Number of Reads Number of Writes Data Read
Data Written
----------------------------------------------------------------------
CRS_v001 00c0ff13349e0000fdd6c44f01000000 11.7KB
5 239609039 107145979 1321.0GB
110.5GB
DATA1_v001 00c0ff1311f30000d0d8c44f01000000 2604.5KB
209 1701459941 336563041 33.9TB
3183.3GB
DATA2_v001 00c0ff1311f3000040f9ce5701000000 0B
0 921 15 2273.7KB
2114.0KB
DATA_v001 00c0ff1311f30000bdd7c44f01000000 2382.8KB
194 1506859273 250871273 30.0TB
2025.4GB
FRA1_v001 00c0ff13349e00001ed9c44f01000000 1923.5KB
31 25123006 161690520 1891.0GB
2229.1GB
FRA_v001 00c0ff13349e00001fd8c44f01000000 2008.5KB
37 122050872 245301514 3475.7GB
3409.1GB
----------------------------------------------------------------------
Press any key to continue (Q to quit)%

As a starting point: According to the manual, that SAN has a command to disable the pager. See the documentation for set cli-parameters pager off. It may be sufficient to execute that command. It may also have a command to set the terminal rows and columns that it uses for formatting output, although I wasn't able to find one.
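For example, a minimal sketch of how that could be dropped into your script right after login (the command name comes from the P2000 CLI documentation; verify it against your firmware revision before relying on it):
child.expect('Password: ')
child.sendline(PASS)
child.expect('# ')
# disable the SAN's pager so long listings aren't paused (assumes the firmware supports it)
child.sendline('set cli-parameters pager off')
child.expect('# ')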
Getting to your question: When an ssh client connects to a server and requests an interactive session, it can optionally request a PTY (pseudo-tty) for the server side of the session. When it does that, it informs the server of the lines, columns, and terminal type which the server should use for the TTY. Your SAN may honor PTY requests and use the lines and columns values to format its output. Or it may not.
The ssh client gets the rows and columns for the PTY request from the TTY for its standard input. This is the PTY which pexpect is using to communicate with ssh.
There is an existing question that discusses how to set the terminal size for a pexpect session. ssh doesn't honor the LINES or COLUMNS environment variables as far as I can tell, so I doubt that approach would work. However, calling child.setwinsize() after spawning ssh ought to work:
child = pexpect.spawn(cmd)
child.setwinsize(400,400)
If you have trouble with this, you could try setting the terminal size by invoking stty locally before ssh:
child=pexpect.spawn('stty rows x cols y; ssh user@host')
Finally, you need to make sure that ssh actually requests a PTY for the session. It does this by default in some cases, which should include the way you are running it. But it has a command-line option -tt to force it to allocate a PTY. You could add that option to the ssh command line to make sure:
child=pexpect.spawn('ssh -tt user@host')
or
child=pexpect.spawn('stty rows x cols y; ssh -tt user@host')
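Putting the pieces together with your existing variables, a rough sketch might look like this (the 400x400 window size is just an arbitrary value larger than the 155-character output, and whether the SAN honors it depends on the firmware):
child = pexpect.spawn('ssh -tt manage@' + HOST)  # -tt forces PTY allocation
child.setwinsize(400, 400)                       # rows, cols for the local PTY
child.logfile = open('FetchSan.log', 'w+')
child.expect('Password: ')
child.sendline(PASS)
child.expect('# ')
child.sendline('set cli-parameters pager off')   # optional: also disable the SAN's pager
child.expect('# ')
child.sendline(CTL_COMMAND)
child.expect('# ')
print child.before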

Related

Rocket Universe hung deleting multipart file

I'm having a process hang on trying to delete a part of a multipart file. There's a lock on the file, but the process trying to do the delete is the one that holds the lock. What could be causing it to hang?
Our product uses a multipart file called MR.WORK. A new part gets created for each process, with a part name consisting of the letter U and the userno, so here it's MR.WORK,U-3. Let's say I'm logged in as foo, and the product is also logged in as foo, running in a phantom.
>PORT.STATUS USER foo
There are currently 2 uniVerse sessions; 1 interactive, 1 phantom
Pid.... User name. Who. Port name..... Last command processed............
23144   foo         2   /dev/pts/2     PORT.STATUS USER foo
Pid.... User name. Who. Last command processed............................
 2086   foo        -3   DELETE-FILE DATA MR.WORK,U-3
It gets there and just hangs forever. Something else has an IN type group lock on the same inode, but I gather that IN is just informational and not really locking.
>LIST.READU EVERY
Active Group Locks:                                   Record Group Group Group
Device.... Inode.... Netnode Userno Lmode G-Address.   Locks ...RD ...SH ...EX
      2068  21630372       0      2  9 IN        400       1     0     0     0
      2068  21502283       0     -1 57 RD        400       0     1     0     0
Active Record Locks:
Device.... Inode.... Netnode Userno Lmode   Pid Login Id Item-ID.............
      2068  21630372       0     -3  9 RU  2086 foo      MR.WORK,U-3
I'm stumped. The MR.WORK,U-3 part is not particularly large. I've tried deleting and recreating the file and we'll see if that helps, but I'm not hopeful. Any ideas?

Cachegrind file very small

I am new to profiling. I am trying to profile my PHP code with Xdebug.
The cachegrind file is created but has no significant content
I have set xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_name = cachegrind+%p+%H+%R.cg
I call my page with additional GET parameter ?XDEBUG_PROFILE=1
The cachegrind file is generated, but this is its entire content:
version: 1
creator: xdebug 2.7.0alpha1 (PHP 7.0.30-dev)
cmd: C:\WPNserver\www\DMResources\Classes\VendorClasses\PHPMySQLiDatabase\MysqliDb.php
part: 1
positions: line
events: Time Memory
fl=(1)
fn=(221) php::mysqli->close
1244 103 -14832
fl=(42)
fn=(222) MysqliDbExt->__destruct
1239 56 0
cfl=(1)
cfn=(221)
calls=1 0 0
1244 103 -14832
That's it - I must be missing something fundamental.
I think you hit a known bug in Xdebug.
As suggested by Derick in the issue tracker, you can work around this by adding %r to the profiler output name, e.g.: xdebug.profiler_output_name = cachegrind+%p+%H+%R+%r.cg
(with %r adding a random number to the name)

Parse output from k6 data to get specific information

I am trying to extract data from a k6 output (https://docs.k6.io/docs/results-output):
data_received.........: 246 kB 21 kB/s
data_sent.............: 174 kB 15 kB/s
http_req_blocked......: avg=26.24ms min=0s med=13.5ms max=145.27ms p(90)=61.04ms p(95)=70.04ms
http_req_connecting...: avg=23.96ms min=0s med=12ms max=145.27ms p(90)=57.03ms p(95)=66.04ms
http_req_duration.....: avg=197.41ms min=70.32ms med=91.56ms max=619.44ms p(90)=288.2ms p(95)=326.23ms
http_req_receiving....: avg=141.82µs min=0s med=0s max=1ms p(90)=1ms p(95)=1ms
http_req_sending......: avg=8.15ms min=0s med=0s max=334.23ms p(90)=1ms p(95)=1ms
http_req_waiting......: avg=189.12ms min=70.04ms med=91.06ms max=343.42ms p(90)=282.2ms p(95)=309.22ms
http_reqs.............: 190 16.054553/s
iterations............: 5 0.422488/s
vus...................: 200 min=200 max=200
vus_max...............: 200 min=200 max=200
The data comes in the above format and I am trying to find a way to get each line in the above along with the values only. As an example:
http_req_duration: 197.41ms, 70.32ms, 91.56ms, 619.44ms, 288.2ms, 326.23ms
I have to do this for ~50-100 files and want to find a RegEx or similar quicker way to do it, without writing too much code. Is it possible?
Here's a simple Python solution:
import re
FIELD = re.compile(r"(\w+)\.*:(.*)", re.DOTALL)  # split the line into name:value
VALUES = re.compile(r"(?<==).*?(?=\s|$)")  # match individual values from http_req_* fields

# open the input file `k6_input.log` for reading, and `k6_parsed.log` for writing
with open("k6_input.log", "r") as f_in, open("k6_parsed.log", "w") as f_out:
    for line in f_in:  # read the input file line by line
        field = FIELD.match(line)  # first match all <field_name>...:<values> fields
        if field:
            name = field.group(1)  # get the field name from the first capture group
            f_out.write(name + ": ")  # write the field name to the output file
            value = field.group(2)  # get the field value from the second capture group
            if name[:9] == "http_req_":  # parse out only http_req_* fields
                f_out.write(", ".join(VALUES.findall(value)) + "\n")  # extract the values
            else:  # verbatim copy of other fields
                f_out.write(value)
        else:  # encountered an unrecognizable field, just copy the line
            f_out.write(line)
For a file with the contents above you'll get:
data_received: 246 kB 21 kB/s
data_sent: 174 kB 15 kB/s
http_req_blocked: 26.24ms, 0s, 13.5ms, 145.27ms, 61.04ms, 70.04ms
http_req_connecting: 23.96ms, 0s, 12ms, 145.27ms, 57.03ms, 66.04ms
http_req_duration: 197.41ms, 70.32ms, 91.56ms, 619.44ms, 288.2ms, 326.23ms
http_req_receiving: 141.82µs, 0s, 0s, 1ms, 1ms, 1ms
http_req_sending: 8.15ms, 0s, 0s, 334.23ms, 1ms, 1ms
http_req_waiting: 189.12ms, 70.04ms, 91.06ms, 343.42ms, 282.2ms, 309.22ms
http_reqs: 190 16.054553/s
iterations: 5 0.422488/s
vus: 200 min=200 max=200
vus_max: 200 min=200 max=200
If you have to run it over many files, I'd suggest you investigate glob.glob(), os.walk() or os.listdir() to list all the files you need, then loop over them and execute the above, further automating the process.
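For example, a rough sketch of such a driver loop (the k6_results/ directory, the file pattern, and the parse_k6_file wrapper are illustrative names only, not part of k6):
import glob
import re

FIELD = re.compile(r"(\w+)\.*:(.*)", re.DOTALL)
VALUES = re.compile(r"(?<==).*?(?=\s|$)")

def parse_k6_file(in_path, out_path):
    # same parsing logic as above, wrapped in a function for reuse
    with open(in_path, "r") as f_in, open(out_path, "w") as f_out:
        for line in f_in:
            field = FIELD.match(line)
            if not field:
                f_out.write(line)
                continue
            name, value = field.group(1), field.group(2)
            if name.startswith("http_req_"):
                f_out.write(name + ": " + ", ".join(VALUES.findall(value)) + "\n")
            else:
                f_out.write(name + ":" + value)

# process every matching file in the (example) directory
for in_path in glob.glob("k6_results/*.log"):
    parse_k6_file(in_path, in_path.replace(".log", "_parsed.log"))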

Aerospike losing documents when node goes down

I've been doing some tests using Aerospike and I noticed behavior different from what is advertised.
I have a cluster of 4 nodes running on AWS in the same AZ. The instances are t2.micro (1 CPU, 1 GB RAM, 25 GB SSD) running Amazon Linux with the Aerospike AMI.
aerospike.conf:
heartbeat {
    mode mesh
    port 3002
    mesh-seed-address-port XXX.XX.XXX.164 3002
    mesh-seed-address-port XXX.XX.XXX.167 3002
    mesh-seed-address-port XXX.XX.XXX.165 3002
    # internal aws IPs
...
namespace teste2 {
    replication-factor 2
    memory-size 650M
    default-ttl 365d
    storage-engine device {
        file /opt/aerospike/data/bar.dat
        filesize 22G
        data-in-memory false
    }
}
What I did was a test to see if I would lose documents when a node goes down. For that I wrote a little Python code:
from __future__ import print_function
import aerospike
import pandas as pd
import numpy as np
import time
import sys

config = {
    'hosts': [('XX.XX.XX.XX', 3000), ('XX.XX.XX.XX', 3000),
              ('XX.XX.XX.XX', 3000), ('XX.XX.XX.XX', 3000)]
}  # external aws ips

client = aerospike.client(config).connect()

for i in range(1, 10000):
    key = ('teste2', 'setTest3', ''.join(('p', str(i))))
    try:
        client.put(key, {'id11': i})
        print(i)
    except Exception as e:
        print("error: {0}".format(e), file=sys.stderr)
    time.sleep(1)
I used this code just to insert a sequence of integers that I could check afterwards. I ran the code and after a few seconds I stopped the Aerospike service on one node for 10 seconds, using sudo service aerospike stop and sudo service aerospike coldstart to restart it.
I waited a few seconds until the nodes finished all the migrations and then executed the following Python script:
query = client.query('teste2', 'setTest3')
query.select('id11')
te = []

def save_result((key, metadata, record)):
    te.append(record)

query.foreach(save_result)

d = pd.DataFrame(te)
d2 = d.sort(columns='id11')
te2 = np.array(d2.id11)

for i in range(0, len(te2)):
    if i > 0:
        if (te2[i] != (te2[i-1]+1)):
            print('no %d' % int(te2[i-1]+1))

print(te2)
And got as response:
no 3
no 6
no 8
no 11
no 13
no 17
no 20
no 22
no 24
no 26
no 30
no 34
no 39
no 41
no 48
no 53
[ 1 2 5 7 10 12 16 19 21 23 25 27 28 29 33 35 36 37 38 40 43 44 45 46 47 51 52 54]
Is my cluster configured wrong, or is this normal?
PS: I tried to include as much information as I could; if you can suggest more information to include, I would appreciate it.
Actually I found a solution, and it is pretty simple and foolish to be honest.
In the configuration file we have some parameters for network communication between nodes, such as:
interval 150 # Number of milliseconds between heartbeats
timeout 10 # Number of heartbeat intervals to wait
# before timing out a node
These two parameters set how long it takes the cluster to realize a node is down and out of the cluster (in this case 1.5 seconds).
What we found useful was to tune the client's write policies to work along with these parameters.
Depending on the client you will have policies such as the number of retries before the operation fails, the timeout for the operation, and the time between retries.
You just need to adapt the client parameters. For example: set the number of retries to 4 (each executed after 500 ms) and the timeout to 2 seconds. That way the client recognizes that the node is down and redirects the operation to another node.
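As a rough illustration, here is how those values could be expressed with the Aerospike Python client (the policy field names below follow recent client versions; older clients name them differently, so check the documentation for your version):
import aerospike

# sketch only: 4 retries, 500 ms between attempts, 2 s overall timeout,
# mirroring the example values above
write_policy = {
    'max_retries': 4,
    'sleep_between_retries': 500,  # milliseconds between attempts
    'total_timeout': 2000,         # overall timeout in milliseconds
}

config = {
    'hosts': [('XX.XX.XX.XX', 3000)],      # your seed node(s)
    'policies': {'write': write_policy},   # default write policy for this client
}
client = aerospike.client(config).connect()
# the same dict can also be passed per call via the policy argument of put()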
This setup can put a heavy extra load on the cluster, but it worked for us.

Parse ASCII Output of a Device-File in C++

I wrote a kernel-space driver for a USB device. When it is connected, it shows up as /dev/myusbdev0, for example.
From the command line I can send commands to the device with echo -en "command" > /dev/myusbdev0 and read the results with cat /dev/myusbdev0.
OK, now I have to write a C++ program. First I would open the device file for read/write with:
int fd = open("/dev/echo", O_RDWR);
After that, a command is sent to get the device working:
char cmd[] = { "\x02sEN LMDscandata 1\x03" };
write(fd, cmd, sizeof(cmd));
Now I get to the part I don't know how to handle yet. I need to keep reading from the device, as it keeps sending data continuously. This is the data I need to read and parse ...
char buf[512];
read(fd, buf, sizeof(buf));
The data looks like the following; each message starts with \x02 and ends with \x03, and they are not always the same size:
sRA LMDscandata 1 1 89A27F 0 0 343 347 27477BA9 2747813B 0 0 7 0 0
1388 168 0 1 DIST1 3F800000 00000000 186A0 1388 15 8A1 8A5 8AB 8AC 8A6
8AC 8B6 8C8 8C2 8C9 8CB 8C4 8E4 8E1 8EB 8E0 8F5 908 8FC 907 906 0 0 0
0 0 0
All values are separated with a 0x20 hex {SPC}.
I think I need some kind of while loop to continuously read the data from an \x02 until I read a \x03.
Once I have a complete scan, I need to parse this ASCII message into its separate parts (some variables uint_16, uint_8, enum_16, ...).
Any idea how I can read a complete scan into a buf[] and then parse its components out?
Since, as you say, the device sends continuously, I would recommend adding a queue to hold the incoming chunks, and some dispatching that takes parts out of the queue, i.e. \x02 to \x03, decoupling the processing from the receiving of chunks.
Furthermore, you can then have single objects each handling one complete block from \x02 to \x03, perhaps threaded (which makes sense with the information given).
device => chunk reader => input queue => input reader => data handling
Hope this helps.