Saving a file without closing it in Python - python-2.7

Suppose I have a dictionary of files that I am iterating through. I am doing something with each file and then writing the result to a report (note: not using the csv module).
file_list = ['f1', 'f2', 'f3', 'f4']
report = "C:/reports/report_%s"%(timestamp)
r = open(report, "w")
What happens if something in f3 crashes the script before it finishes? I can use try/except to handle an error, but I don't want to simply close the report; perhaps I want the script to continue. Perhaps there is a power failure while the script is running, or there are multiple try/except statements and I don't want to close for each error. Essentially, I just want to save the file without closing it on each iteration of the list, so that if a crash occurs I can still retrieve the data written to the report up to that point. How can I do this? I cannot simply do report.save(), right? I thought about using flush() with os.fsync() as explained in another question, but I am not 100% sure that's applicable to my scenario. Any suggestions on how to achieve my goal here?
try:
    # ...do stuff...
    r.write(<stuff_output> + "\n")
    try:
        # ...do more stuff...
        r.write(<stuff_output> + "\n")
    except:
        continue
    r.close()
except Exception as e:
    pass

It appears I was able to resolve this issue by simply using flush() and os.fsync() within the correct scope and placing r.close() outside of the try. So even if an iteration fails, the loop passes or continues, and at the end the file is closed:
try:
    for item in file_list:
        try:
            r.write("This is item: " + item + "\n")
        except:
            r.flush()
            os.fsync(r.fileno())
            continue
except Exception as e:
    pass
r.close()
This would always print "This is item: f1", "This is item: f2", "This is item: f3" to the report.
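As a side note, if the goal is that data written before a crash or power failure is already on disk, a slightly different arrangement, flushing after every successful write rather than only in the except branch, may be closer to what the question describes. A rough sketch (the report path is a placeholder):

import os

report_path = "C:/reports/report_example"  # placeholder path
file_list = ['f1', 'f2', 'f3', 'f4']

r = open(report_path, "w")
try:
    for item in file_list:
        try:
            r.write("This is item: " + item + "\n")
            r.flush()              # push Python's buffer to the OS
            os.fsync(r.fileno())   # ask the OS to commit it to disk
        except Exception:
            continue               # skip the bad item but keep the report open
finally:
    r.close()                      # always close at the end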

Related

AndroidViewClient: How to check if "Id/Text/Image" exists before touching it?

How can I check whether an ID/Text exists before touching it?
I was trying with this:
# class=android.widget.ImageView
com_evernote___id_close = vc.findViewByIdOrRaise("com.evernote:id/close")
if not com_evernote___id_close:
    vc.sleep(1)
else:
    com_evernote___id_close.touch()
After logging in on Evernote, it sometimes shows some help info. So if the view exists I want to close it; if not, the script should continue executing.
And when it does not exist, it shows this error:
File "/usr/local/lib/python2.7/dist-packages/androidviewclient-11.0.10-py2.7.egg/com/dtmilano/android/viewclient.py", line 3352, in findViewByIdOrRaise
raise ViewNotFoundException("ID", viewId, root)
com.dtmilano.android.viewclient.ViewNotFoundException: Couldn't find View with ID='com.evernote:id/close' in tree with root=ROOT
If you don't want an exception to be raised when the View is not found, use ViewClient.findViewById() instead of ViewClient.findViewByIdOrRaise().
Then check whether the returned value is None. That simple!
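A minimal sketch of that suggestion, reusing the vc instance and view ID from the question:

# findViewById() returns None instead of raising when the view is absent.
com_evernote___id_close = vc.findViewById("com.evernote:id/close")
if com_evernote___id_close is not None:
    com_evernote___id_close.touch()   # close the help popup
else:
    vc.sleep(1)                       # view not there, keep going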

Adding an if or try/except block in a Python function

I got this question and I'm a little unsure how to solve it:
"There is a NoneType error when reproducing the code. getaddressdata() returns a None value. This can be fixed by adding an if-statement in getpricelist() to check whether the data is None. Use a try/except block to handle invalid data."
I have to fix this before my code can run.
My function/code is shown here:
def getpricelist():
    l1 = []
    for line in file('addresslist.txt'):
        data = getaddressdata(line.strip(), 'Cambridge,MA')
        if data != 'None':
            l1.append(data)
    return l1
Where do I put the try/except block?
You should use the Pythonic idiom is None / is not None to check whether the variable is None (not the string 'None'):
data = getaddressdata(line.strip(), 'Cambridge,MA')
if data is not None:
    l1.append(data)
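And since you also asked where the try/except goes, here is one possible placement, just a sketch, wrapping the call that can fail on invalid data (the bare Exception is an assumption; catch something more specific if you know what getaddressdata() raises):

def getpricelist():
    l1 = []
    for line in file('addresslist.txt'):
        try:
            data = getaddressdata(line.strip(), 'Cambridge,MA')
        except Exception:        # invalid data for this line, skip it
            continue
        if data is not None:
            l1.append(data)
    return l1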
Also see:
not None test in Python
What is the difference between "is None" and "== None"
Hope that helps.

django - iterating over return render_to_response

I would like to read a file, update the website, read more lines, update the site, and so on. The logic is below, but it's not working.
It only shows the first line from the logfile and stops. Is there a way to iterate over 'return render_to_response'?
# django view calling a remote python script that appends output to the logfile
proc = subprocess.Popen([program, branch, service, version, nodelist])
logfile = 'text.log'
fh = open(logfile, 'r')
while proc.poll() == None:
    where = fh.tell()
    line = fh.read()
    if not line:
        time.sleep(1)
        fh.seek(where, os.SEEK_SET)
    else:
        output = cgi.escape(line)
        output = line.replace('\n\r', '<br>')
        return render_to_response('hostinfo/deployservices.html', {'response': output})
Thank you for your help.
You can actually do this by making your function a generator, that is, using 'yield' to return each line.
However, you would need to create the response directly rather than using render_to_response.
render_to_response will render the first batch to the website and stop. Then the website must call this view again somehow if you want to send the next batch. You will also have to maintain a record of where you were in the log file so that the second batch can be read from that point.
I assume that you have some logic in the templates so that the second post to render_to_response doesn't overwrite the first.
If your data is not humongous, you should explore sending over the entire contents you want to show on the webpage each time you read some new lines.
Instead of re-inventing the wheel, use django_logtail
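A rough sketch of the generator approach, assuming a Django version that has StreamingHttpResponse (on older versions an iterator can be passed to HttpResponse instead); the subprocess command is a placeholder:

import time
import cgi
import subprocess
from django.http import StreamingHttpResponse

def tail_log(proc, logfile):
    # Yield escaped log lines as HTML until the subprocess finishes.
    with open(logfile, 'r') as fh:
        while proc.poll() is None:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield cgi.escape(line) + '<br>'

def deployservices(request):
    proc = subprocess.Popen(['/path/to/deploy_script'])  # placeholder command
    return StreamingHttpResponse(tail_log(proc, 'text.log'))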

Django - Passing a filtered result to a template

Inside of my Django view I am trying to retrieve results from my database and then pass them on to my template with the following code:
f = request.GET.get('f')
try:
    fb_friends_found = UserProfile.objects.filter(facebookid__in=f).values('facebookid')
    i = fb_friends_found[0]  # To get the dictionary inside of the list
    results = i['facebookid']  # To retrieve the value for the 'facebookid' key
    variables = RequestContext(request, {'results': results})
    return render_to_response('findfriends.html', variables)
I carried out the first three lines within the 'try' block using manage.py shell and this worked fine, printing the correct 'facebookid'.
Unfortunately I can't get it to work in my browser. Any suggestions?
Do you have a specific problem you're running into, such as an exception?
Note that a try block without an except (or finally) clause is a SyntaxError, so as posted this code shouldn't even run; you need something like:
try:
    # something
except Exception:  # but be more specific
    print "exception occurred"
Otherwise, the code looks good, and if nothing is rendering in your browser, I'd look into the template. Unless... you're hiding errors in your try block, in which case you should remove the try block and let the error occur to understand what's wrong.
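For illustration only, one way the view could look with an except clause added, keeping the names from the question (the IndexError choice and the UserProfile import are assumptions about the surrounding code):

from django.shortcuts import render_to_response
from django.template import RequestContext
# from myapp.models import UserProfile   # wherever the model actually lives

def findfriends(request):
    f = request.GET.get('f')
    results = None
    try:
        fb_friends_found = UserProfile.objects.filter(facebookid__in=f).values('facebookid')
        results = fb_friends_found[0]['facebookid']
    except IndexError:
        pass   # no matching profiles; leave results as None
    variables = RequestContext(request, {'results': results})
    return render_to_response('findfriends.html', variables)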

Can't determine what's causing a regex error, and would like some input on the efficiency of my program

Nearing what I would like to think is completion on a tool I've been working on. What I've got going on is some code that does essentially this: open several files and URLs consisting of known malware/phishing-related websites/domains and create a list for each; parse the HTML of a URL passed when the method is called, pulling out all the a href links and placing them in a separate list; then, for every link placed in the new list, create a regex for every item in the malware and phishing lists and compare against it to determine whether any of the links parsed from the passed URL are malicious.
The problem I've run into is in iterating over the items of all 3 lists; obviously I'm doing it wrong, since it's throwing this error at me:
File "./test.py", line 95, in <module>
main()
File "./test.py", line 92, in main
crawler.crawl(url)
File "./test.py", line 41, in crawl
self.reg1 = re.compile(link1)
File "/usr/lib/python2.6/re.py", line 190, in compile
return _compile(pattern, flags)
File "/usr/lib/python2.6/re.py", line 245, in _compile
raise error, v # invalid expression
sre_constants.error: multiple repeat
The following is the segment of code I'm having problems with (the creation of the malware-related list is omitted, as that part is working fine for me):
def crawl(self, url):
    try:
        doc = parse("http://" + url).getroot()
        doc.make_links_absolute("http://" + url, resolve_base_href=True)
        for tag in doc.xpath("//a[@href]"):
            old = tag.get('href')
            fixed = urllib.unquote(old)
            self.links.append(fixed)
    except urllib.error.URLERROR as err:
        print(err)
    for tgt in self.links:
        for link in self.mal_list:
            self.reg = re.compile(link)
        for link1 in self.phish_list:
            self.reg1 = re.compile(link1)
        found = self.reg.search(tgt)
        if found:
            print(found.group())
        else:
            print("No matches found...")
Can anyone spot what I've done wrong with the for loops and list iteration that would be causing that regex error? How might I fix it? And, probably most importantly, is the way I'm going about doing this 'pythonic' or even efficient? Considering what I'm trying to do here, is there a better way of doing it?
It seems like your problem is that some of the URLs contain special regex characters, such as ? and +; for instance, the string ++ is really quite likely. The other problem is that you keep overwriting the regex you're using to test. If you just need to check if one string is contained in another, there's no need for a regex; just use
for tgt in self.links:
    for link in (self.mal_list + self.phish_list):
        if link in tgt: print link
And if you're just comparing for equality, you can use == instead of in.
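And if a regex really is needed (say, for case-insensitive matching), one option is to escape each known-bad string first, which avoids the 'multiple repeat' error caused by characters like '+' and '?'. A sketch, assuming it runs inside crawl() after self.links has been filled:

import re

# Compile each known-bad URL once, escaping regex metacharacters.
patterns = [re.compile(re.escape(bad), re.IGNORECASE)
            for bad in (self.mal_list + self.phish_list)]
for tgt in self.links:
    if any(p.search(tgt) for p in patterns):
        print("Possible malicious link: " + tgt)
    else:
        print("No matches found...")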