I'm trying to print a text file from a webserver in a Python program, but I am receiving errors. Any help would be greatly appreciated; here is my code:
import RPi.GPIO as GPIO
import urllib2
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(5,GPIO.OUT)
true = 1
while(true):
    try:
        response = urllib2.urlopen('http://148.251.158.132/k.txt')
        status = response.read()
    except urllib2.HTTPError, e:
        print e.code
    except urllib2.URLError, e:
        print e.args
   print status
    if status=='bulbion':
        GPIO.output(5,True)
    elif status=='bulbioff':
        GPIO.output(5,False)
Judging by your comments, the first error, "SyntaxError: Missing parentheses in call to print", is caused by leaving the parentheses out of your print statements. People usually hit this after updating their Python version, because the old Python 2 print statements never required parentheses. The other error, "SyntaxError: unindent does not match any outer indentation level", occurs because your print status statement is one space behind all of your other statements on that indentation level; you can fix this by moving the print statement one space forward.
Changing your code to this should fix the problems:
import RPi.GPIO as GPIO
import urllib2
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(5,GPIO.OUT)
true = 1
while(true):
    try:
        response = urllib2.urlopen('http://148.251.158.132/k.txt')
        status = response.read()
    except urllib2.HTTPError, e:
        print (e.code)
    except urllib2.URLError, e:
        print (e.args)
    print (status)
    if status=='bulbion':
        GPIO.output(5,True)
    elif status=='bulbioff':
        GPIO.output(5,False)
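One caveat: the "Missing parentheses in call to print" message only appears under Python 3, and under Python 3 the code above would still fail, because urllib2 was split into urllib.request / urllib.error and the "except SomeError, e" syntax was replaced by "except SomeError as e". Here is a rough sketch of the same loop under that assumption (the decode and strip calls are my addition, since read() returns bytes in Python 3 and the file may end with a newline):
# Sketch assuming Python 3; urllib2 is replaced by urllib.request / urllib.error
import RPi.GPIO as GPIO
import urllib.request
import urllib.error

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(5, GPIO.OUT)

while True:
    status = None
    try:
        response = urllib.request.urlopen('http://148.251.158.132/k.txt')
        # read() returns bytes in Python 3; decode and strip a possible trailing newline
        status = response.read().decode('utf-8').strip()
    except urllib.error.HTTPError as e:
        print(e.code)
    except urllib.error.URLError as e:
        print(e.args)
    print(status)
    if status == 'bulbion':
        GPIO.output(5, True)
    elif status == 'bulbioff':
        GPIO.output(5, False)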
Hope this helps!
I just wrote a simple web-scraping script to give me all the episode links on a particular site's page. The script was working fine, but now it's broken, and I didn't change anything.
Try this URL (for scraping): http://www.crunchyroll.com/tabi-machi-late-show
Now the script works midway and then gives me an error stating 'Element not found in the cache - perhaps the page has changed since it was looked up'.
I looked it up on the internet and people suggested using the 'implicit wait' command at certain places. I did that, but still no luck.
UPDATE: I tried this script on a remote desktop and it works there without any problems.
Here's my script:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import os
import time
from subprocess import Popen
#------------------------------------------------
try:
    Link = raw_input("Please enter your Link : ")
    if not Link:
        raise ValueError('Please Enter A Link To The Anime Page. This Application Will now Exit in 5 Seconds.')
except ValueError as e:
    print(e)
    time.sleep(5)
    exit()
print 'Analyzing the Page. Hold on a minute.'
driver = webdriver.Firefox()
driver.get(Link)
assert "Crunchyroll" in driver.title
driver.implicitly_wait(5)  # <-- I tried removing this line as well. No luck.
elem = driver.find_elements_by_xpath("//*[@href]")
driver.implicitly_wait(10)  # <-- I tried removing this line as well. No luck.
text_file = open("BatchLink.txt", "w")
print 'Fetching The Links, please wait.'
for elem in elem:
    x = elem.get_attribute("href")
    #print x
    text_file.write(x+'\n')
print 'Links have been fetched. Just doing the final cleaning now.'
text_file.close()
CleanFile = open("queue.txt", "w")
with open('BatchLink.txt') as f:
    mylist = f.read().splitlines()
#print mylist
with open('BatchLink.txt', 'r') as inF:
    for line in inF:
        if 'episode' in line:
            CleanFile.write(line)
print 'Please Check the file named queue.txt'
CleanFile.close()
os.remove('BatchLink.txt')
driver.close()
Here's a screenshot of the error (might be of some help) :
http://i.imgur.com/SaANlsg.png
OK, I haven't worked with Python, but I know the problem.
You have a variable that you initialize: elem = driver.find_elements_by_xpath("//*[@href]")
After that you do some things with it in a loop.
Before you finish the loop, try to initialize this variable again:
elem = driver.find_elements_by_xpath("//*[@href]")
The thing is that the DOM changes, and you are losing the element collection.
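In Python, that suggestion could look roughly like the sketch below: re-fetch the collection on every pass and grab the href immediately, so a stale reference is never reused after the page re-renders. This is just an illustration of the idea, not the asker's exact script:
# Sketch: re-initialize the element list each iteration so references never go stale
links = driver.find_elements_by_xpath("//*[@href]")
hrefs = []
for i in range(len(links)):
    # the page may have re-rendered since the last pass, so look the elements up again
    links = driver.find_elements_by_xpath("//*[@href]")
    if i >= len(links):
        break
    hrefs.append(links[i].get_attribute("href"))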
I was reading a similar question, Returning error string from a function in python. While experimenting with creating something similar in an object-oriented way, so I could learn a few more things, I got lost.
I am using Python 2.7 and I am a beginner in object-oriented programming.
I cannot figure out how to make it work.
Sample code checkArgumentInput.py:
#!/usr/bin/python
__author__ = 'author'

class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class ArgumentValidationError(Error):
    pass

    def __init__(self, arguments):
        self.arguments = arguments

    def print_method(self, input_arguments):
        if len(input_arguments) != 3:
            raise ArgumentValidationError("Error on argument input!")
        else:
            self.arguments = input_arguments
            return self.arguments
And on the main.py script:
#!/usr/bin/python
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentValidationError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except checkArgumentInput.ArgumentValidationError as exception:
        # handle exception here and get error message
        print exception.message
When I execute the main.py script it produces two blank lines, whether or not I provide any arguments as input.
So my question is: how do I make this work?
I know that there is a module, argparse, that can do this argument checking for me, but I want to implement something that I could also use in other cases (try, except).
Thank you in advance for the time and effort spent reading and replying to my question.
OK. So sys.argv is a list that you usually index with brackets and a number between them, like sys.argv[1]. It holds your command-line input; for example, sys.argv[0] is the name of the file. Say you run:
main.py 42
In this case main.py is sys.argv[0] and 42 is sys.argv[1].
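To illustrate (a minimal sketch, not the asker's script):
import sys

# run as:  python main.py 42
print(sys.argv)     # ['main.py', '42']
print(sys.argv[0])  # main.py  (the script name)
print(sys.argv[1])  # 42       (the first argument, always a string)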
You need to identify which string you are going to take from the command line.
I think that this is the problem.
For more info: https://docs.python.org/2/library/sys.html
I did some research and found this useful question/answer that helped me understand my error: Manually raising (throwing) an exception in Python.
I am posting the correct, functional code below, just in case someone will benefit from it in the future.
Sample code checkArgumentInput.py:
#!/usr/bin/python
__author__ = 'author'

class ArgumentLookupError(LookupError):
    pass

    def __init__(self, *args):  # *args because I do not know the number of args (input from terminal)
        self.output = None
        self.argument_list = args

    def validate_argument_input(self, argument_input_list):
        if len(argument_input_list) != 3:
            raise ValueError('Error on argument input!')
        else:
            self.output = "Success"
            return self.output
The second part main.py:
#!/usr/bin/python
import sys
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentLookupError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except ValueError as exception:
        # handle exception here and get error message
        print exception.message
The code above prints Error on argument input!, as expected, because I am violating the condition.
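For completeness, here is a small sketch of the variant the original question was aiming for, where the custom exception class itself is raised and caught (names reused from this thread; an illustration, not the asker's code):
class ArgumentLookupError(LookupError):
    pass

def validate_argument_input(argument_input_list):
    # raise the custom exception instead of ValueError
    if len(argument_input_list) != 3:
        raise ArgumentLookupError('Error on argument input!')
    return "Success"

if __name__ == '__main__':
    import sys
    try:
        print validate_argument_input(sys.argv)
    except ArgumentLookupError as exception:
        print exception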
Anyway, thank you all for your time and effort; I hope this answer will help someone else in the future.
Currently I am practicing the basic concepts of accessing the web using Python. I am following a tutorial on YouTube and was guided up to the following code.
from urllib2 import urlopen, HTTPError
from BeautifulSoup import BeautifulSoup
import re

url = "http://getbusinessreviews.org/"

try:
    webpage = urlopen(url).read
except HTTPError, e:
    if e.code == 404:
        e.msg = 'data not found on remote: %s' % e.msg
    raise

pathFinderTitle = re.compile('<h2 class="entry-title"><a href.* rel="bookmark">(.*)</a></h2>')

if webpage:
    if pathFinderTitle:
        findPathTitle = re.findall(pathFinderTitle, webpage)
    else:
        print "unable to get path finder title"
else:
    print "unable to url open "

listIterator = []
listIterator[:] = range(2,10)

for i in listIterator:
    print findPathTitle[i]
I want to extract "Nutracoster" from the following HTML:
<h2 class="entry-title">
<a href="..." rel="bookmark">Nutracoster</a>
</h2>
I've got two questions:
I am getting no results at the moment; can anyone tell me what I am doing wrong? (I guess my regular expression is not well defined.)
How can I pass this regular expression to BeautifulSoup?
Thanks in advance, and sorry for any silly mistakes, since I am still at the learning stage :D
You don't need to use a regex to select an element with Beautiful Soup: it can extract all the <h2> tags with specific attributes by itself.
Furthermore, it's better not to use a regex to parse HTML (see this popular question).
Try this little snippet of code:
from bs4 import BeautifulSoup as BS
from urllib2 import urlopen, HTTPError, URLError

url = "http://getbusinessreviews.org/"

try:
    webpage = urlopen(url)
except HTTPError, e:
    if e.code == 404:
        e.msg = 'data not found on remote: %s' % e.msg
    raise
except URLError, e:
    print e.args

soup = BS(webpage, 'lxml')

## Relevant lines ##
for h2 in soup.find_all("h2", attrs={"class": "entry-title"}):
    print h2.text
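To the second question: if you still want a regular expression, Beautiful Soup's find_all accepts compiled patterns for attribute values (and for tag names or strings), so you can pass the pattern to it instead of matching raw HTML. A small sketch along those lines, reusing the soup object from above:
import re

# Sketch: pass a compiled pattern to find_all instead of regexing the raw HTML
for h2 in soup.find_all("h2", attrs={"class": re.compile(r"entry-title")}):
    link = h2.find("a")  # the post title sits inside an <a rel="bookmark"> tag
    if link is not None:
        print link.text  # e.g. Nutracoster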
My code tries to collect tweets about "cars" on 2014-10-01. In an attempt to handle the rate limit and any other Twitter-related errors (i.e. over capacity), I added code at the end telling the program to stop and wait for 20 minutes whenever a TweepError occurs.
Unfortunately, it doesn't work: the script crashes and I can still see the rate-limit error message. Please advise, thanks.
import tweepy
import time
import csv

ckey = "xxx"
csecret = "xxx"
atoken = "xxx-xxx"
asecret = "xxx"

OAUTH_KEYS = {'consumer_key':ckey, 'consumer_secret':csecret,
              'access_token_key':atoken, 'access_token_secret':asecret}
auth = tweepy.OAuthHandler(OAUTH_KEYS['consumer_key'], OAUTH_KEYS['consumer_secret'])
api = tweepy.API(auth)

startSince = '2014-10-01'
endUntil = '2014-10-02'
searchTerms = 'cars'

for tweet in tweepy.Cursor(api.search, q=searchTerms,
                           since=startSince, until=endUntil).items(999999999):
    try:
        print "Name:", tweet.author.name.encode('utf8')
        print "Screen-name:", tweet.author.screen_name.encode('utf8')
        print "Tweet created:", tweet.created_at
    except tweepy.error.TweepError:
        time.sleep(60*20)
        continue
    except tweepy.TweepError:
        time.sleep(60*20)
        continue
    except TweepError:
        time.sleep(60*20)
        continue
    except IOError:
        time.sleep(60*5)
        continue
    except StopIteration:
        break
Your issue is that your try/except statement happens independently of your call to the Twitter API. The tweepy.Cursor is what triggers the rate-limit error. Try including these lines:
for tweet in tweepy.Cursor(api.search, q=searchTerms,
                           since=startSince, until=endUntil).items(999999999):
within your try and see if the TweepError is caught (it should be). You may need a small modification to get the cursor to continue from the proper location but it should be trivial.
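A rough sketch of that rearrangement is below (not an exact fix; as noted above, restarting the Cursor begins the search again from the top, so resuming from the last seen tweet would need extra bookkeeping):
# Sketch: keep the Cursor iteration inside the try so errors raised while
# fetching the next page of results are caught too
while True:
    try:
        for tweet in tweepy.Cursor(api.search, q=searchTerms,
                                   since=startSince, until=endUntil).items(999999999):
            print "Name:", tweet.author.name.encode('utf8')
            print "Screen-name:", tweet.author.screen_name.encode('utf8')
            print "Tweet created:", tweet.created_at
        break  # finished cleanly, leave the retry loop
    except tweepy.TweepError:
        time.sleep(60*20)  # back off for 20 minutes, then restart the search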
I've written a small Python script to fix the checksums of L3/L4 protocols using Scapy. When I run the script it is not taking the command-line arguments, or maybe for some other reason it is not generating the fixed-checksum pcap. I've verified rdpcap() from the Scapy command line and it works fine; run from the script it does not get executed. My program is:
import sys
import logging
logging.getLogger("scapy").setLevel(1)

try:
    from scapy.all import *
except ImportError:
    import scapy

if len(sys.argv) != 3:
    print "Usage: ./ChecksumFixer <input_pcap_file> <output_pcap_file>"
    print "Example: ./ChecksumFixer input.pcap output.pcap"
    sys.exit(1)

#------------------------Command Line Argument---------------------------------------
input_file = sys.argv[1]
output_file = sys.argv[2]

#------------------------Get The Layer and Fix Checksum-------------------------------
def getLayer(p):
    for paktype in (scapy.IP, scapy.TCP, scapy.UDP, scapy.ICMP):
        try:
            p.getlayer(paktype).chksum = None
        except AttributeError:
            pass
    return p

#-----------------------FixPcap: read the input file and write to the output file----------------
def fixpcap():
    paks = scapy.rdpcap(input_file)
    fc = map(getLayer, paks)
    scapy.wrpcap(output_file, fc)
The reason your function is not executed is that you're not invoking it. Adding a call to fixpcap() at the end of your script will fix this issue.
Furthermore, here are a few more corrections and suggestions:
The statement following except ImportError: should be indented as well, as follows:
try:
    from scapy.all import *
except ImportError:
    import scapy
Use argparse to parse command-line arguments.
Wrap your main code in an if __name__ == '__main__': block.
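Putting those suggestions together, the tail of the script could look roughly like this sketch (it assumes fixpcap is changed to accept the two file names as parameters rather than reading module-level globals):
import argparse

def fixpcap(input_file, output_file):
    # same body as before, now taking the file names as parameters
    paks = scapy.rdpcap(input_file)
    fc = map(getLayer, paks)
    scapy.wrpcap(output_file, fc)

def main():
    parser = argparse.ArgumentParser(description='Fix L3/L4 checksums in a pcap file')
    parser.add_argument('input_pcap')
    parser.add_argument('output_pcap')
    args = parser.parse_args()
    fixpcap(args.input_pcap, args.output_pcap)

if __name__ == '__main__':
    main()  # the original script never called fixpcap(), so nothing ran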