Determining wait time while executing a binary on QNX prompt using telnet - python-2.7

I am processing certain binary log files with sloginfo on QNX. Using ftp and telnet, I upload the binary from my machine to a VMware instance and then run the sloginfo command there.
The issue is that the binary files to be processed vary in size (from 50 MB to 200 MB), so the time needed to process each file differs, which makes it impossible to pick a fixed wait/sleep time.
I need to know whether sloginfo returns a value that can be used as a flag. I tried tn.read_until() without getting the desired results.
import os, sys, telnetlib, time
from ftplib import FTP

def upload(ftp, filed):
    ext = os.path.splitext(filed)[1]
    if ext in (".txt", ".htm", ".html"):
        ftp.storlines("STOR " + filed, open(filed))
    else:
        ftp.storbinary("STOR " + filed, open(filed, "rb"), 1024)

def gettext(ftp, filename, outfile=None):
    # fetch a text file
    if outfile is None:
        outfile = sys.stdout
    # use a lambda to add newlines to the lines read from the server
    ftp.retrlines("RETR " + filename, lambda s, w=outfile.write: w(s + "\n"))

if __name__ == '__main__':
    dbfile = "LOG1"
    nonpassive = False
    remotesite = '192.168.0.128'
    ftp_port = '21'
    tel_port = '23'
    password = 'root'
    ftp = FTP()
    ftp.connect(remotesite, ftp_port)
    ftp.login('root', 'root')
    print 'Uploading the Log file... Please wait...'
    upload(ftp, dbfile)
    print 'File Uploaded Successfully...'
    tn = telnetlib.Telnet(remotesite, tel_port)
    tn.read_until("login: ")
    tn.write('root' + "\n")
    if password:
        tn.write(password + "\n")
    tn.write("sloginfo LOG1 >> LOG1.txt\n")
    # need to get more control on this sleep time
    time.sleep(300)
    print 'Downloading text file...'
    gettext(ftp, "LOG1.txt", open(r'LOG1.txt', 'wb'))
    ftp.close()
    tn.close()

tn.write("sloginfo LOG1 >> LOG1.txt\n") modified the above comment with tn.write ('sloginfo '+ strdbfile + '>> ' + strdbfiletxt+ '; echo Done!\n') and this has resolved the issue

Related

How do I use the same data to create multiple files?

I am trying to create two files with the same data: one file to use for updating live web data, and the other as a log. One file needs to be appended to and updated frequently. I can create the log fine but am struggling with how to handle the data for the second file.
I have tried using a 'with open' statement for the log file. When I read it into a live web page, it shows me only the data that was logged previously, and the data updates only when the file is closed.
#!/usr/bin/env python2.7
import os
import RPi.GPIO as GPIO
import time
import subprocess

# Solar Panel Script 1.0
# Set pin for Pump Relay Signal (PR = pin 29)
# Set up Pump Relay BCM5 (pin 29) as output pin in off position
GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.OUT, initial=0)
GPIO.setwarnings(False)

# Load Hot Water Tank (HWT), Solar Panel (SP), and Outside Temp (OT) with OWFS
# Create CSV file for temperature data
from time import sleep, strftime, time

with open("/var/www/html/data.csv", "a") as log:
    while True:
        with open("/mnt/1wire/28.C14777910F02/temperature", "r") as myfile:
            HWT = myfile.read().replace('\n', '')
        with open("/mnt/1wire/28.390877910402/temperature", "r") as myfile2:
            SP = myfile2.read().replace('\n', '')
        log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))
        # Solar Hot Water Heater Module
        # Turns on PR only if SP is 10F hotter than HWT. Checks OT for freezing temps; if less than 33, PR is off.
        print ('hot water: ' + HWT)
        print ('solar panel: ' + SP)
        flt_HWT = float(HWT)
        flt_SP = float(SP)
        if flt_HWT > 170:
            GPIO.output(5, GPIO.LOW)   # Pump Relay Off
        if flt_SP > (flt_HWT + 10):
            GPIO.output(5, GPIO.HIGH)  # Pump Relay On
        state = GPIO.input(5)
        print state
        sleep(20)  # 20 s while testing; 600 s = 10 minutes
I expected the log file to allow me to collect data from it while it was open.
log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))
This is where you are writing the log. You can simply include another with open() statement here:
with open("secondfile.log") as secfile:
log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP))) ##original log file can be here
secfile.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP))) ##and here you are wrighting the second file.
However, if you are writing multiple files, it is better to move the writes into a function of their own.
def write_file(text, filename):
    try:
        with open(filename, "a") as f:
            f.write(text)
        return True
    except IOError:
        return False  # include any other exception handling here
Now you can use:
success = write_file("log text", "filename.log")
if success:
    success = write_file("log2 text", "filename2.log")
if success:
    print("Yay, both files have been written to")
else:
    print("Aww, there was an error writing to the file")

Tensorboard Error - NameError: name 'tensorboard' is not defined

I am now learning TensorFlow but am unable to get TensorBoard to work. I tried the simple program below with no luck. The program works before I add the TensorBoard line, but with it I get the following error:
NameError: name 'tensorboard' is not defined
Any assistance is appreciated.
import tensorflow as tf
a = tf.constant(5, name="input_a")
b = tf.constant(3, name="input_a")
c = tf.multiply(a,b, name="mul_c")
d = tf.add(a,b, name="add_d")
e = tf.add(c,d, name="add_e")
sess = tf.Session()
sess.run(e)
output = sess.run(e)
writer = tf.summary.FileWriter('/tmp/newtest', graph=sess.graph)
print(sess.run(e))
tensorboard --logdir /tmp/newtest
I believe this has already been answered, but to give a sample of what I did regarding this, in the hope it helps you or others: the last line of your script, tensorboard --logdir /tmp/newtest, is a shell command, not Python, which is why the interpreter raises the NameError. Run it from a terminal instead, or launch it from the script as below.
The following just covers the end-of-run overhead of triggering and showing TensorBoard.
import subprocess
import webbrowser
import time

logLocation = 'tflearn_logs'

print("\r\nWould you like to see the visual results (y/N)? ", end='', flush=True)
answer = input()
if answer.strip().lower() == "y":
    port = str(8018)
    print("Starting Tensorboard to visualize... ")
    process = subprocess.Popen(['tensorboard', "--logdir='" + logLocation + "'", '--port=' + port])
    # Wait for a few seconds, give the tensorboard a headstart
    time.sleep(5)
    print("Opening Tensorboard webpage... ")
    url = 'http://127.0.0.1:' + port + '/'
    # Path differs per OS (Windows, Linux, macOS)
    chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
    webbrowser.get(chrome_path).open(url)
    print("Press enter to quit... ", end='', flush=True)
    answer = input()
    if process is not None:
        process.kill()
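If Chrome is not installed at that hard-coded path, a more portable variant (a sketch, relying on whatever the system default browser is) drops the explicit path:

import webbrowser
webbrowser.open('http://127.0.0.1:8018/')  # falls back to the system default browser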

How can I use nessrest api (python) to export nessus scan reports in xml?

I am trying to automate running and downloading Nessus scans using Python. I have been using the nessrest API for Python and am able to run a scan successfully, but have not been able to download the report in nessus format.
Any ideas how I can do this? I have been using the module scan_download, but that actually executes before my scan even finishes.
Thanks for the help in advance!
Just looking back at this question, here's an example of using the nessrest API to pull down CSV report exports from your Nessus host:
#!/usr/bin/python2.7
import sys
import os
import io
from nessrest import ness6rest

file_format = 'csv'  # options: nessus, csv, db, html
dbpasswd = ''
scan = ness6rest.Scanner(url="https://nessus:8834", login="admin", password="P#ssword123", insecure=True)
if scan:
    scan.action(action='scans', method='get')
    folders = scan.res['folders']
    scans = scan.res['scans']

# mirror the Nessus folder layout locally (except trash)
for f in folders:
    if not os.path.exists(f['name']):
        if not f['type'] == 'trash':
            os.mkdir(f['name'])

for s in scans:
    scan.scan_name = s['name']
    scan.scan_id = s['id']
    folder_name = next(f['name'] for f in folders if f['id'] == s['folder_id'])
    folder_type = next(f['type'] for f in folders if f['id'] == s['folder_id'])
    # skip trash items
    if folder_type == 'trash':
        continue
    if s['status'] == 'completed':
        file_name = '%s_%s.%s' % (scan.scan_name, scan.scan_id, file_format)
        file_name = file_name.replace('\\', '_')
        file_name = file_name.replace('/', '_')
        file_name = file_name.strip()
        relative_path_name = folder_name + '/' + file_name
        # PDF is not yet supported: the nessrest wrapper returns the PDF as a str
        # object instead of bytes, which makes writing the file out correctly a chore.
        # The other formats, including binary-mode db, can all be written with 'wb'.
        file_modes = 'wb'
        with io.open(relative_path_name, file_modes) as fp:
            if file_format != "db":
                fp.write(scan.download_scan(export_format=file_format))
            else:
                fp.write(scan.download_scan(export_format=file_format, dbpasswd=dbpasswd))
You can see more examples here:
https://github.com/tenable/nessrest/tree/master/scripts
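On the other half of the question (scan_download firing before the scan finishes): one approach is to poll the scan until Nessus reports it completed, then download. Below is a sketch using the same scan.action call as the example above; the 'info'/'status' fields follow the Nessus 6 REST response, so adjust to your version:

import time

def wait_for_scan(scan, scan_id, poll_seconds=30):
    # poll the /scans/<id> endpoint until the scan reports 'completed'
    while True:
        scan.action(action='scans/' + str(scan_id), method='get')
        if scan.res['info']['status'] == 'completed':
            return
        time.sleep(poll_seconds)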

Paramiko error: size mismatch in put

I am trying to copy a few files from my local Windows directory to a remote Linux dir.
It works for files with the same kind of extension, but breaks when there are different extensions in a folder.
The Code:
import os
import glob
import paramiko

glob_pattern = '*.*'
# note: the original snippet uses ssh without creating it; something like
# this is needed first
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    ssh.connect(host, username=user, password=pwd)
    ftp = ssh.open_sftp()
    try:
        ftp.mkdir(dir_remote)
        command = dir_remote + '/setuplog'
        ftp.mkdir(command)
        commande = dir_remote + '/emsfolder'
        ftp.mkdir(commande)
        try:
            for fname in glob.glob(uploadfolder + os.sep + glob_pattern):
                local_file = os.path.join(uploadfolder, fname)
                remote_file = dir_remote + '/' + os.path.basename(local_file)
                ftp.put(local_file, remote_file)
                ftp.chmod(remote_file, 0777)
        except IOError, e:
            print (e)
    except IOError, e:
        print (e)
except paramiko.AuthenticationException, ae:
    print (ae)
finally:
    ssh.close()
I was trying to transfer only two files (1.sh and 2.pl). While 1.sh got copied, a 0-byte 2.pl file was created at the remote server, and then I get the error:
size mismatch in put! 0 != 2200
I am using:
Python 2.7, Paramiko 1.15.2
Kindly help.
I doubt this has anything to do with different extensions in a folder. The code in paramiko's sftp_client.py:putfo() reads at the end:
s = self.stat(remotepath)
if s.st_size != size:
    raise IOError('size mismatch in put! %d != %d' % (s.st_size, size))
I had a similar issue and it turned out that the remote filesystem was full and thus paramiko couldn't write/put the file.
BTW, instead of uploadfolder + os.sep + glob_pattern (and similar) you can use os.path.join(uploadfolder, glob_pattern).
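If you want to rule out a full remote filesystem, a quick check over the same SSH connection (a sketch; dir_remote as in the question) is:

# print free space on the remote filesystem holding the target dir
stdin, stdout, stderr = ssh.exec_command('df -h ' + dir_remote)
print(stdout.read())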

Issue in file write

I've been struggling with this for a few hours. I want to send a text file generated by Django to another server. For that I use scp and subprocess.call(). Everything goes well and I get a return code of 0, but scp sends 0 bytes; the file created on the server side is empty.
I printed the exact command executed; the path is right, and if I paste it into a shell it works perfectly.
Here is the code:
form = SubmitForm(request.POST or None)
context['form'] = form
if request.method == 'POST':
    if form.is_valid():
        # write file in ~/hipercic/apps/dcor/jobs/
        params_file = open('apps/dcor/jobs/job_' + datetime.today().strftime("%Y%m%d_%H%M%S") + '_params.txt', 'wb')
        for key, val in form.cleaned_data.iteritems():
            params_file.write(str(val) + ' \n')
        params_file.close
        cmd = 'scp /home/guillet/hipercic/' + params_file.name + ' guillet@helios.public.stolaf.edu:'
        context['cmd'] = cmd
        return_code = subprocess.call(cmd, shell=True)
        context['return_code'] = return_code
        return render(request, 'base_pending.html', context)
I thought about a possible race condition, the file not having time to be completely written before being sent, but nothing changes with a time.sleep(3).
Also, something really weird, and the heart of the issue: if I try to reopen and read the file right after closing it, the file is empty:
with open('/home/guillet/hipercic/' + params_file.name, 'rb') as f:
    print f.read()  # prints nothing!!
You have written params_file.close instead of params_file.close(); without the parentheses the method is never called, so nothing is flushed.
Closing the file properly flushes the buffered data to the file you want to write.
It is good practice to use the with keyword when dealing with file objects: the file is then properly closed as soon as its suite finishes.
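For example, a minimal rewrite of the write step from the question using with (same fields and path pattern as above):

# with closes, and therefore flushes, the file automatically, even on exceptions
with open('apps/dcor/jobs/job_' + datetime.today().strftime("%Y%m%d_%H%M%S") + '_params.txt', 'wb') as params_file:
    for key, val in form.cleaned_data.iteritems():
        params_file.write(str(val) + ' \n')
# by this point the data is on disk and safe to scp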