Program for Backup - Python - python-2.7

I'm trying to execute the following code in Python 2.7 on Windows 7. The purpose of the code is to take a backup of the specified source folder to a specified target folder, using the naming pattern given.
However, I'm not able to get it to work. The output has always been 'Backup Failed'.
Please advise on how I can resolve this and get the code working.
Thanks.
Code:
backup_ver1.py
import os
import time
import sys

sys.path.append('C:\Python27\GnuWin32\bin')

source = 'C:\New'
target_dir = 'E:\Backup'
target = target_dir + os.sep + time.strftime('%Y%m%d%H%M%S') + '.zip'
zip_command = "zip -qr {0} {1}".format(target, ''.join(source))

print('This is a program for backing up files')
print(zip_command)

if os.system(zip_command) == 0:
    print('Successful backup to', target)
else:
    print('Backup FAILED')

See if escaping the \'s helps:
source = 'C:\\New'
target_dir = 'E:\\Backup'
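Alternatively, here is a minimal sketch that sidesteps the escaping problem with raw strings and uses the standard-library zipfile module instead of an external zip.exe (it assumes the source and target folders exist; the paths are just illustrative, adjust them to yours):

import os
import time
import zipfile

source = r'C:\New'         # raw string, no backslash escaping needed
target_dir = r'E:\Backup'  # assumes this drive/folder already exists
target = os.path.join(target_dir, time.strftime('%Y%m%d%H%M%S') + '.zip')

zf = zipfile.ZipFile(target, 'w', zipfile.ZIP_DEFLATED)
for root, dirs, files in os.walk(source):
    for name in files:
        path = os.path.join(root, name)
        # store paths relative to the source folder inside the archive
        zf.write(path, os.path.relpath(path, source))
zf.close()
print('Successful backup to ' + target)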

Related

How to download a snappy.parquet file from s3 using Boto in Python

I'm new to this, and I'm trying to download a snappy.parquet file from Amazon S3 that I can later convert to a CSV file.
I tried working with the following example I found online, but I get an empty folder. Can anyone please help me?
import boto
import sys, os
from boto.s3.key import Key
from boto.exception import S3ResponseError

DOWNLOAD_LOCATION_PATH = ""
BUCKET_NAME = ""
AWS_ACCESS_KEY_ID = ""
AWS_ACCESS_SECRET_KEY = ""

conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_ACCESS_SECRET_KEY)
bucket = conn.get_bucket(BUCKET_NAME)

# go through the list of files
bucket_list = bucket.list()
for l in bucket_list:
    key_string = str(l.key)
    s3_path = DOWNLOAD_LOCATION_PATH + key_string
    try:
        print ("Current File is ", s3_path)
        l.get_contents_to_filename(s3_path)
    except (OSError, S3ResponseError) as e:
        pass
        # check if the file has been downloaded locally
        if not os.path.exists(s3_path):
            try:
                os.makedirs(s3_path)
            except OSError as exc:
                # guard against race conditions
                import errno
                if exc.errno != errno.EEXIST:
                    raise
The script you are using appears to recursively download the contents of the specified S3 bucket (BUCKET_NAME) to the specified local directory (DOWNLOAD_LOCATION_PATH). FWIW, I notice this script looks like it comes from here.
The "Current File is ..." output line should show you the progress of these files being written. One problem you might be having is due to this line:
s3_path = DOWNLOAD_LOCATION_PATH + key_string
If you had specified DOWNLOAD_LOCATION_PATH at the top as a directory without a trailing '/' character, e.g. like this:
DOWNLOAD_LOCATION_PATH = '/tmp/my_dir'
then the files being downloaded would be written not underneath the /tmp/my_dir directory, but directly in /tmp/ with a my_dir prefix on each filename! You can fix this by changing this line to:
s3_path = os.path.join(DOWNLOAD_LOCATION_PATH, key_string)
Other than that, the script appears to work alright. You may want to add this line at the very top:
from __future__ import print_function
if you are still using Python 2.x, otherwise the print output will look a bit odd (print will think you are printing a 2-Tuple).
Your question also makes it sound like you really only want/need to download a single file from the bucket -- if so, this isn't really a great script to be using, since it's downloading everything.
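If that is all you need, a minimal boto sketch for fetching a single object might look like this (the key name and local path below are hypothetical placeholders; substitute your own):

import boto

conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_ACCESS_SECRET_KEY)
bucket = conn.get_bucket(BUCKET_NAME)

# get_key returns None if the object does not exist in the bucket
key = bucket.get_key('some/prefix/part-00000.snappy.parquet')  # hypothetical key
if key is not None:
    key.get_contents_to_filename('/tmp/part-00000.snappy.parquet')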

Running locally blastn against nt db thru python script

I have a FASTA file with sequences that I want to BLAST locally against the 'nt' database downloaded to my computer from the NCBI website.
I downloaded BLAST 2.6.0.
In order to access BLAST from anywhere, I did:
gedit ~/.bashrc
export PATH=/usr/local/ncbi-blast-2.6.0+/bin:$PATH
then I did:
source ~/.bashrc
Then I downloaded the 'nt' database (155.6 GB) and stored it in /usr/local/blastdb
I want to run this command in a Python script:
from Bio.Blast.Applications import NcbiblastnCommandline
cline = NcbiblastnCommandline(query="/home/proprietaire/Desktop/JADE/stage_scripts/seq_error_fasta.fasta", db="/usr/local/blastdb/nt", evalue=0.001, out="blast_result_local.xml", outfmt=5)
But it is not working for some reason. Please help me figure out what I'm doing wrong. Thank you for your help.
EDIT:
'seq_error_fasta.fasta' is my FASTA file with 64 sequences that I want to BLAST against the 'nt' database.
My 'seq_error_fasta.fasta' contains sequences loaded with errors like S, J and X characters, so I want to BLAST them against the 'nt' db in order to get the closest, better sequence.
I found out that I need to format the nt database downloaded from NCBI, so I did this:
makeblastdb -dbtype nucl -in nt
Then I added this after my cline variable in my python script:
stdout, stderr = cline()
The script is running but unfortunately I'm getting this error now:
Bus error (core dumped)
I think it's a RAM problem, so I thought I needed to shorten the 'nt' db by taking only the bacterial sequences. I looked on NCBI for a bacteria-only database, but there are multiple databases for different species, more than a thousand of them.
I also tried blast online using this script:
f = open('output_blast.xml','w')
for rec in SeqIO.parse(open("seq_error_fasta.fasta"), 'fasta'):
    result_handle = NCBIWWW.qblast("blastn", "nt", rec.format("fasta"), format_type="XML", alignments=1, perc_ident=95, expect=0.001)
    f.write(result_handle.read())
f.close()
but this is only doing one query sequence and returning all hits, although I specified 1 alignment and 95% identity.
This is driving me crazy lol. Please help.
Download NCBI nr database:
$ mkdir db
$ cd db
$ wget ftp://ftp.ncbi.nlm.nih.gov/blast/db/nr*
Run blast:
import io
import shlex
import subprocess

BLAST_OUTFMT6 = """\
'6 qacc sacc pident length mismatch gapopen qstart qend sstart send evalue bitscore qseq sseq'\
"""

BLAST_OUTFMT6_COLUMN_NAMES = [
    'query_id', 'subject_id', 'pc_identity', 'alignment_length', 'mismatches', 'gap_opens',
    'q_start', 'q_end', 's_start', 's_end', 'evalue', 'bitscore', 'qseq', 'sseq',
]

def blastp(sequence, db, evalue=0.001, max_target_seqs=100000):
    system_command = (
        'blastp -db {db} -outfmt {outfmt} -evalue {evalue} -max_target_seqs {max_target_seqs}'
        .format(db=db, outfmt=BLAST_OUTFMT6, evalue=evalue, max_target_seqs=max_target_seqs)
    )
    cp = subprocess.Popen(
        shlex.split(system_command),
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True)
    result, error_message = cp.communicate(sequence)
    if error_message.strip():
        print("Error: {}".format(error_message))
    return result

if __name__ == '__main__':
    result = blastp('AAAAAAAAAAAAAA', db='/path/to/db/nr')
    print(result)
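As a follow-up, the BLAST_OUTFMT6_COLUMN_NAMES list above is not actually used by the snippet itself. If you also have pandas installed (an assumption on my part, it is not part of the original script), you could load the tab-separated outfmt 6 output into a DataFrame, roughly like this on Python 3:

import io
import pandas as pd

result = blastp('AAAAAAAAAAAAAA', db='/path/to/db/nr')
# outfmt 6 is tab-separated, one hit per line; this will fail if result is empty
hits = pd.read_csv(io.StringIO(result), sep='\t', names=BLAST_OUTFMT6_COLUMN_NAMES)
print(hits.head())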

Tensorboard Error - NameError: name 'tensorboard' is not defined

I am now learning TensorFlow but am unable to get TensorBoard to work. I tried the simple program below with no luck. The program works before I add the tensorboard line, but when I include it I get the following error:
NameError: name 'tensorboard' is not defined
Any assistance is appreciated.
import tensorflow as tf
a = tf.constant(5, name="input_a")
b = tf.constant(3, name="input_a")
c = tf.multiply(a,b, name="mul_c")
d = tf.add(a,b, name="add_d")
e = tf.add(c,d, name="add_e")
sess = tf.Session()
sess.run(e)
output = sess.run(e)
writer = tf.summary.FileWriter('/tmp/newtest', graph=sess.graph)
print(sess.run(e))
tensorboard --logdir /tmp/newtest
I believe this is already 'answered', but to give a sample of what I did regarding this, I hope it helps you or others.
This just covers the overhead at the end of a run of triggering and showing TensorBoard.
import subprocess
import webbrowser
import time

logLocation = 'tflearn_logs'

print("\r\nWould you like to see the visual results (y/N)? ", end='', flush=True)
answer = input()
if answer.strip().lower() == "y":
    port = str(8018)
    print("Starting Tensorboard to visualize... ")
    process = subprocess.Popen(['tensorboard', "--logdir='" + logLocation + "'", '--port=' + port])

    # Wait for a few seconds, give the tensorboard a headstart
    time.sleep(5)

    print("Opening Tensorboard webpage... ")
    url = 'http://127.0.0.1:' + port + '/'
    # Path differs per OS (Windows, Linux, iOS)
    chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
    webbrowser.get(chrome_path).open(url)

    print("Press enter to quit... ", end='', flush=True)
    answer = input()
    if process is not None:
        process.kill()
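If you would rather not hard-code a Chrome path, webbrowser.open with no browser specified falls back to the system default, which keeps the idea portable. A trimmed-down sketch (the log directory and port are just illustrative, and the tensorboard executable is assumed to be on PATH):

import subprocess
import time
import webbrowser

logdir = '/tmp/newtest'   # wherever your FileWriter wrote the graph
port = 8018

process = subprocess.Popen(['tensorboard', '--logdir', logdir, '--port', str(port)])
time.sleep(5)                                   # give TensorBoard a head start
webbrowser.open('http://127.0.0.1:%d/' % port)  # default browser on any OS

input('Press enter to quit... ')                # raw_input() on Python 2
process.kill()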

Hadoop commands from python script?

I have multiple Hadoop commands to be run, and these are going to be invoked from a Python script. Currently, I have tried the following way.
import os
import xml.etree.ElementTree as etree
import subprocess
filename = "sample.xml"
__currentlocation__ = os.getcwd()
__fullpath__ = os.path.join(__currentlocation__,filename)
tree = etree.parse(__fullpath__)
root = tree.getroot()
hivetable = root.find("hivetable").text
dburl = root.find("dburl").text
username = root.find("username").text
password = root.find("password").text
tablename = root.find("tablename").text
mappers = root.find("mappers").text
targetdir = root.find("targetdir").text
print hivetable
print dburl
print username
print password
print tablename
print mappers
print targetdir
p = subprocess.call(['hadoop','fs','-rmr',targetdir],stdout = subprocess.PIPE, stderr = subprocess.PIPE)
But the code is not working. It is neither throwing an error nor deleting the directory.
I suggest you slightly change your approach; here is how I'm doing it. I make use of the Python library commands, and how you use it then depends on your needs (https://docs.python.org/2/library/commands.html).
Here is a little demo:
import commands as com
print com.getoutput('hadoop fs -ls /')
This gives you output like the following (depending on what you have in the HDFS dir):
/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop/hadoop-env.sh: line 25: /Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home: Is a directory
Found 2 items
drwxr-xr-x - someone supergroup 0 2017-03-29 13:48 /hdfs_dir_1
drwxr-xr-x - someone supergroup 0 2017-03-24 13:42 /hdfs_dir_2
Note: the commands lib doesn't work with Python 3 (to my knowledge); I'm using Python 2.7.
Note: Be aware of the limitations of commands.
If you use subprocess, which is the equivalent of commands for Python 3, then you might consider finding a proper way to deal with your 'pipelines'. I find this discussion useful in that sense: (subprocess popen to run commands (HDFS/hadoop))
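For example, a rough subprocess equivalent of the commands demo above (a sketch; it assumes the hadoop binary is on your PATH):

import subprocess

# check_output raises CalledProcessError on a non-zero exit code,
# so a failing hadoop command will not fail silently
output = subprocess.check_output(['hadoop', 'fs', '-ls', '/'])
print(output.decode())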
I hope this suggestion helps you!
Best

Collectstatic and S3: not finding updated files

I am using Amazon S3 to store static files for a Django project, but collectstatic is not finding updated files - only new ones.
I've been looking for an answer for ages, and my guess is that I have something configured incorrectly. I followed this blog post to help get everything set up.
I also ran into this question which seems identical to my problem, but I have tried all the solutions already.
I even tried using this plugin which is suggested in this question.
Here is some information that might be useful:
settings.py
...
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
...
# S3 Settings
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
S3_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = S3_URL
AWS_PRELOAD_METADATA = False
requirements.txt
...
Django==1.5.1
boto==2.10.0
django-storages==1.1.8
python-dateutil==2.1
Edit1:
I apologize if this question is too specific to my own circumstances to be of any help to a large audience. Nonetheless, this has been hampering my productivity for a long time and I have wasted many hours looking for solutions, so I am starting a bounty to reward anyone who can help troubleshoot this problem.
Edit2:
I just ran across a similar problem somewhere. I am in a different time zone than the location of my AWS bucket. If collectstatic uses timestamps by default, could this interfere with the process?
Thanks
I think I solved this problem. Like you, I have spent so many hours on this problem. I am also on the bug report you found on bitbucket. Here is what I just accomplished.
I had
django-storages==1.1.8
Collectfast==0.1.11
This does not work at all. Deleting everything for a clean start does not work either. After that, it cannot pick up modifications and refuses to update anything.
The problem is with our time zone. S3 will say the files it has were last modified later than the ones we want to upload, so django collectstatic will not try to copy the new ones over at all, and it will call the files "unmodified". For example, here is what I saw before my fix:
Collected static files in 0:00:45.292022.
Skipped 407 already synced files.
0 static files copied, 1 unmodified.
My solution is "To hell with modified time!". Besides the time zone problem that we are solving here, what if I made a mistake and need to roll back? It will refuse to deploy the old static files and leave my website broken.
Here is my pull request to Collectfast https://github.com/FundedByMe/collectfast/pull/11 . I still left a flag so if you really want to check modified time, you can still do it. Before it got merged, just use my code at https://github.com/sunshineo/collectfast
Have a nice day!
--Gordon
PS: Stayed up till 4:40am for this. My day is ruined for sure.
After hours of digging around, I found this bug report.
I changed my requirements to revert to a previous version of Django storages.
django-storages==1.1.5
You might want to consider using this plugin written by antonagestam on Github:
https://github.com/FundedByMe/collectfast
It compares the checksum of the files, which is a guaranteed way of determining when a file has changed. It's the accepted answer at this other stackoverflow question: Faster alternative to manage.py collectstatic (w/ s3boto storage backend) to sync static files to s3?
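From memory, enabling it at the time was roughly the following (a sketch only; check the Collectfast README for the exact settings required by the version you install):

# settings.py -- sketch, verify against the Collectfast README for your version
INSTALLED_APPS = (
    'collectfast',                   # must be listed before staticfiles
    'django.contrib.staticfiles',
    # ... the rest of your apps
)
AWS_PRELOAD_METADATA = True          # Collectfast relies on cached S3 metadata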
There are some good answers here but I spent some time on this today so figured I'd contribute one more in case it helps someone in the future. Following advice found in other threads I confirmed that, for me, this was indeed caused by a difference in time zone. My django time wasn't incorrect but was set to EST and S3 was set to GMT. In testing, I reverted to django-storages 1.1.5 which did seem to get collectstatic working. Partially due to personal preference, I was unwilling to a) roll back three versions of django-storages and lose any potential bug fixes or b) alter time zones for components of my project for what essentially boils down to a convenience function (albeit an important one).
I wrote a short script to do the same job as collectstatic without the aforementioned alterations. It will need a little modifying for your app, but it should work for standard cases if it is placed at the app level and 'static_dirs' is replaced with the names of your project's apps. It is run via the terminal with 'python whatever_you_call_it.py -e environment_name' (set this to your AWS bucket).
import sys, os, subprocess
import boto3
import botocore
from boto3.session import Session
import argparse
import os.path, time
from datetime import datetime, timedelta
import pytz

utc = pytz.UTC

DEV_BUCKET_NAME = 'dev-homfield-media-root'
PROD_BUCKET_NAME = 'homfield-media-root'
static_dirs = ['accounts', 'messaging', 'payments', 'search', 'sitewide']

def main():
    try:
        parser = argparse.ArgumentParser(description='Homfield Collectstatic. Our version of collectstatic to fix django-storages bug.\n')
        parser.add_argument('-e', '--environment', type=str, required=True, help='Name of environment (dev/prod)')
        args = parser.parse_args()
        vargs = vars(args)
        if vargs['environment'] == 'dev':
            selected_bucket = DEV_BUCKET_NAME
            print "\nAre you sure? You're about to push to the DEV bucket. (Y/n)"
        elif vargs['environment'] == 'prod':
            selected_bucket = PROD_BUCKET_NAME
            print "Are you sure? You're about to push to the PROD bucket. (Y/n)"
        else:
            raise ValueError

        acceptable = ['Y', 'y', 'N', 'n']
        confirmation = raw_input().strip()
        while confirmation not in acceptable:
            print "That's an invalid response. (Y/n)"
            confirmation = raw_input().strip()

        if confirmation == 'Y' or confirmation == 'y':
            run(selected_bucket)
        else:
            print "Collectstatic aborted."
    except Exception as e:
        print type(e)
        print "An error occured. S3 staticfiles may not have been updated."

def run(bucket_name):
    # open session with S3
    session = Session(aws_access_key_id='{aws_access_key_id}',
                      aws_secret_access_key='{aws_secret_access_key}',
                      region_name='us-east-1')
    s3 = session.resource('s3')
    bucket = s3.Bucket(bucket_name)

    # loop through static directories
    for directory in static_dirs:
        rootDir = './' + directory + "/static"
        print('Checking directory: %s' % rootDir)

        # loop through subdirectories
        for dirName, subdirList, fileList in os.walk(rootDir):
            # loop through all files in subdirectory
            for fname in fileList:
                try:
                    if fname == '.DS_Store':
                        continue

                    # find and qualify file last modified time
                    full_path = dirName + "/" + fname
                    last_mod_string = time.ctime(os.path.getmtime(full_path))
                    file_last_mod = datetime.strptime(last_mod_string, "%a %b %d %H:%M:%S %Y") + timedelta(hours=5)
                    file_last_mod = utc.localize(file_last_mod)

                    # truncate path for S3 loop and find object, delete and update if it has been updated
                    s3_path = full_path[full_path.find('static'):]
                    found = False
                    for key in bucket.objects.all():
                        if key.key == s3_path:
                            found = True
                            last_mode_date = key.last_modified
                            if last_mode_date < file_last_mod:
                                key.delete()
                                s3.Object(bucket_name, s3_path).put(Body=open(full_path, 'r'), ContentType=get_mime_type(full_path))
                                print "\tUpdated : " + full_path
                    if not found:
                        # if file not found in S3 it is new, send it up
                        print "\tFound a new file. Uploading : " + full_path
                        s3.Object(bucket_name, s3_path).put(Body=open(full_path, 'r'), ContentType=get_mime_type(full_path))
                except:
                    print "ALERT: Big time problems with: " + full_path + ". I'm bowin' out dawg, this shitz on u."

def get_mime_type(full_path):
    try:
        last_index = full_path.rfind('.')
        if last_index < 0:
            return 'application/octet-stream'
        extension = full_path[last_index:]
        return {
            '.js' : 'application/javascript',
            '.css' : 'text/css',
            '.txt' : 'text/plain',
            '.png' : 'image/png',
            '.jpg' : 'image/jpeg',
            '.jpeg' : 'image/jpeg',
            '.eot' : 'application/vnd.ms-fontobject',
            '.svg' : 'image/svg+xml',
            '.ttf' : 'application/octet-stream',
            '.woff' : 'application/x-font-woff',
            '.woff2' : 'application/octet-stream'
        }[extension]
    except:
        print 'ALERT: Couldn\'t match mime type for ' + full_path + '. Sending to S3 as application/octet-stream.'
        return 'application/octet-stream'

if __name__ == '__main__':
    main()
I had a similar problem pushing new files to an S3 bucket (it had previously been working well), but it was not a problem with Django or Python. On my end, I fixed the issue by deleting my local repository and cloning it again.