Corenlp.py does not load any modules - python-2.7

Corenlp.py does not load any modules.
My code is given below. When I run it, I get a timeout error before any model is loaded; the code times out before the commands complete. How can I solve it?
class StanfordCoreNLP(object):
    """
    Command-line interaction with Stanford's CoreNLP java utilities.
    Can be run as a JSON-RPC server or imported as a module.
    """
    def __init__(self, corenlp_path=None):
        """
        Checks the location of the jar files.
        Spawns the server as a process.
        """
        jars = ["stanford-corenlp-3.9.2.jar",
                "stanford-corenlp-3.9.2-models.jar",
                "joda-time.jar",
                "xom.jar",
                "jollyday.jar"]
        # if CoreNLP libraries are in a different directory,
        # change the corenlp_path variable to point to them
        if not corenlp_path:
            corenlp_path = "C:/Python27/stanford-corenlp-full-2018-10-05/"
        java_path = "java"
        classname = "edu.stanford.nlp.pipeline.StanfordCoreNLP"
        # include the properties file, so you can change defaults
        # but any changes in output format will break parse_parser_results()
        props = "-props default.properties"
        # add and check classpaths
        jars = [corenlp_path + jar for jar in jars]
        for jar in jars:
            if not os.path.exists(jar):
                logger.error("Error! Cannot locate %s" % jar)
                sys.exit(1)
        # spawn the server
        start_corenlp = "%s -Xmx1800m -cp %s %s %s" % (java_path, ':'.join(jars), classname, props)
        if VERBOSE:
            logger.debug(start_corenlp)
        self.corenlp = pexpect.popen_spawn.PopenSpawn(start_corenlp)
        #self.corenlp.expect(pexpect.EOF, timeout=None)
        # show progress bar while loading the models
        widgets = ['Loading Models: ', Fraction()]
        pbar = ProgressBar(widgets=widgets, maxval=5, force_update=True).start()
        #i = self.corenlp.expect(pexpect.TIMEOUT, pexpect.EOF, searchwindowsize=-1, async=False)
        self.corenlp.expect("done.", timeout=20)   # Load pos tagger model (~5sec)
        pbar.update(1)
        self.corenlp.expect("done.", timeout=200)  # Load NER-all classifier (~33sec)
        pbar.update(2)
        self.corenlp.expect("done.", timeout=600)  # Load NER-muc classifier (~60sec)
        pbar.update(3)
        self.corenlp.expect("done.", timeout=600)  # Load CoNLL classifier (~50sec)
        pbar.update(4)
        self.corenlp.expect("done.", timeout=200)  # Loading PCFG (~3sec)
        pbar.update(5)
        self.corenlp.expect("Entering interactive shell.")
        pbar.finish()
        ## if i == 1:
        ##     print i
        ##     self.corenlp.sendline('yes')
        ## elif i == 0:
        ##     print i
        ##     print "Timeout"
        ## elif i == 2:
        ##     print "EOF"
        ##     print(self.corenlp.before)
        print i
    def _parse(self, text):
        """
        This is the core interaction with the parser.
        It returns a Python data-structure, while the parse()
        function returns a JSON object
        """
        # clean up anything leftover
        print self, text
        while True:
            try:
                self.corenlp.read_nonblocking(4000, 0.3)
            except pexpect.TIMEOUT:
                break
        self.corenlp.sendline(text)
        # How much time should we give the parser to parse it?
        # the idea here is that you increase the timeout as a
        # function of the text's length.
        # anything longer than 5 seconds requires that you also
        # increase timeout=5 in jsonrpc.py
        print "length", len(text)
        max_expected_time = min(40, 3 + len(text) / 20.0)
        print max_expected_time, type(max_expected_time)
        end_time = float(time.time()) + float(max_expected_time)
        incoming = ""
        while True:
            # Time left, read more data
            try:
                incoming += self.corenlp.read_nonblocking(2000, 1)
                print incoming
                if "\nNLP>" in incoming:
                    break
                time.sleep(0.0001)
                print time
            except pexpect.TIMEOUT:
                if (float(end_time) - float(time.time())) < 0:
                    print end_time - time.time(), end_time, time.time()
                    logger.error("Error: Timeout with input '%s'" % (incoming))
                    print logger
                    return {'error': "timed out after %f seconds" % max_expected_time}
                else:
                    continue
            except pexpect.EOF:
                print pexpect
                break
        if VERBOSE:
            logger.debug("%s\n%s" % ('='*40, incoming))
        try:
            results = parse_parser_results(incoming)
        except Exception, e:
            if VERBOSE:
                logger.debug(traceback.format_exc())
            raise e
        return results
Error:
Loading Models: 0/5
Traceback (most recent call last):
  File "D:\fahma\corefernce resolution\stanford-corenlp-python-master\corenlp.py", line 281, in <module>
    nlp = StanfordCoreNLP()
  File "D:\fahma\corefernce resolution\stanford-corenlp-python-master\corenlp.py", line 173, in __init__
    self.corenlp.expect("done.", timeout=20) # Load pos tagger model (~5sec)
  File "C:\Python27\lib\site-packages\pexpect\spawnbase.py", line 341, in expect
    timeout, searchwindowsize, async_)
  File "C:\Python27\lib\site-packages\pexpect\spawnbase.py", line 369, in expect_list
    return exp.expect_loop(timeout)
  File "C:\Python27\lib\site-packages\pexpect\expect.py", line 117, in expect_loop
    return self.eof(e)
  File "C:\Python27\lib\site-packages\pexpect\expect.py", line 63, in eof
    raise EOF(msg)
EOF: End Of File (EOF).
<pexpect.popen_spawn.PopenSpawn object at 0x021863B0>
searcher: searcher_re:
    0: re.compile('done.')
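
One thing worth checking first (an observation from the posted code, not a confirmed diagnosis): the EOF arrives before the first "done.", which means the spawned Java process exits immediately. The traceback shows Windows paths, yet the classpath is joined with ':'; on Windows, Java expects ';' as the classpath separator, so the JVM may be failing to even find the CoreNLP classes. Joining with os.pathsep makes the command portable, and printing what the process wrote before dying usually reveals Java's actual startup error:

start_corenlp = "%s -Xmx1800m -cp %s %s %s" % (
    java_path, os.pathsep.join(jars), classname, props)  # ';' on Windows, ':' elsewhere
try:
    self.corenlp.expect("done.", timeout=20)
except pexpect.EOF:
    print self.corenlp.before  # Java's startup error message, if any
    raise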

Related

Audio Timeout error in Speech to text API of Google Cloud

I aim to make my Jarvis, which listens all the time and activates when I say hello. I learned that the Google Cloud Speech-to-Text API doesn't listen for more than 60 seconds, but then I found this not-so-famous link, where the script listens for an indefinite duration. The author of the GitHub script says he played a trick: the script refreshes the stream after 60 seconds, so the program doesn't crash.
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/speech/cloud-client/transcribe_streaming_indefinite.py
Following is my modified version, since I wanted it to answer my questions only after I say "hello", and not respond all the time. Now if I ask my Jarvis a question whose answer takes more than 60 seconds, the stream doesn't get the time to refresh and the program crashes. :(
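The core of the refresh trick, distilled from the full script below (all names come from that script): the microphone generator stops yielding after STREAMING_LIMIT milliseconds, which ends the current streaming request, and the outer loop immediately opens a fresh one.

while not stream.closed:
    # generator() breaks after ~55 s, ending this streaming request
    requests = (speech.types.StreamingRecognizeRequest(audio_content=chunk)
                for chunk in stream.generator())
    responses = client.streaming_recognize(streaming_config, requests)
    listen_print_loop(responses, stream, code)  # returns when the request ends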
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Google Cloud Speech API sample application using the streaming API.
NOTE: This module requires the additional dependency `pyaudio`. To install
using pip:
pip install pyaudio
Example usage:
python transcribe_streaming_indefinite.py
"""
# [START speech_transcribe_infinite_streaming]
from __future__ import division
import time
import re
import sys
import os
from google.cloud import speech
from pygame.mixer import *
from googletrans import Translator
# running=True
translator = Translator()
init()
import pyaudio
from six.moves import queue
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\\Users\\mnauf\\Desktop\\rehandevice\\key.json"
from commands2 import commander
cmd=commander()
# Audio recording parameters
STREAMING_LIMIT = 55000
SAMPLE_RATE = 16000
CHUNK_SIZE = int(SAMPLE_RATE / 10) # 100ms
def get_current_time():
    return int(round(time.time() * 1000))

def duration_to_secs(duration):
    return duration.seconds + (duration.nanos / float(1e9))
class ResumableMicrophoneStream:
    """Opens a recording stream as a generator yielding the audio chunks."""
    def __init__(self, rate, chunk_size):
        self._rate = rate
        self._chunk_size = chunk_size
        self._num_channels = 1
        self._max_replay_secs = 5
        # Create a thread-safe buffer of audio data
        self._buff = queue.Queue()
        self.closed = True
        self.start_time = get_current_time()
        # 2 bytes in 16 bit samples
        self._bytes_per_sample = 2 * self._num_channels
        self._bytes_per_second = self._rate * self._bytes_per_sample
        self._bytes_per_chunk = (self._chunk_size * self._bytes_per_sample)
        self._chunks_per_second = (
            self._bytes_per_second // self._bytes_per_chunk)

    def __enter__(self):
        self.closed = False
        self._audio_interface = pyaudio.PyAudio()
        self._audio_stream = self._audio_interface.open(
            format=pyaudio.paInt16,
            channels=self._num_channels,
            rate=self._rate,
            input=True,
            frames_per_buffer=self._chunk_size,
            # Run the audio stream asynchronously to fill the buffer object.
            # This is necessary so that the input device's buffer doesn't
            # overflow while the calling thread makes network requests, etc.
            stream_callback=self._fill_buffer,
        )
        return self

    def __exit__(self, type, value, traceback):
        self._audio_stream.stop_stream()
        self._audio_stream.close()
        self.closed = True
        # Signal the generator to terminate so that the client's
        # streaming_recognize method will not block the process termination.
        self._buff.put(None)
        self._audio_interface.terminate()

    def _fill_buffer(self, in_data, *args, **kwargs):
        """Continuously collect data from the audio stream, into the buffer."""
        self._buff.put(in_data)
        return None, pyaudio.paContinue

    def generator(self):
        while not self.closed:
            if get_current_time() - self.start_time > STREAMING_LIMIT:
                self.start_time = get_current_time()
                break
            # Use a blocking get() to ensure there's at least one chunk of
            # data, and stop iteration if the chunk is None, indicating the
            # end of the audio stream.
            chunk = self._buff.get()
            if chunk is None:
                return
            data = [chunk]
            # Now consume whatever other data's still buffered.
            while True:
                try:
                    chunk = self._buff.get(block=False)
                    if chunk is None:
                        return
                    data.append(chunk)
                except queue.Empty:
                    break
            yield b''.join(data)
def search(responses, stream, code):
    responses = (r for r in responses if (
        r.results and r.results[0].alternatives))
    num_chars_printed = 0
    for response in responses:
        if not response.results:
            continue
        # The `results` list is consecutive. For streaming, we only care about
        # the first result being considered, since once it's `is_final`, it
        # moves on to considering the next utterance.
        result = response.results[0]
        if not result.alternatives:
            continue
        # Display the transcription of the top alternative.
        top_alternative = result.alternatives[0]
        transcript = top_alternative.transcript
        # music.load("/home/pi/Desktop/rehandevice/end.mp3")
        # music.play()
        # Display interim results, but with a carriage return at the end of the
        # line, so subsequent lines will overwrite them.
        # If the previous result was longer than this one, we need to print
        # some extra spaces to overwrite the previous result
        overwrite_chars = ' ' * (num_chars_printed - len(transcript))
        if not result.is_final:
            sys.stdout.write(transcript + overwrite_chars + '\r')
            sys.stdout.flush()
            num_chars_printed = len(transcript)
        else:
            #print(transcript + overwrite_chars)
            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if code == 'ur-PK':
                transcript = translator.translate(transcript).text
            print("Your command: ", transcript + overwrite_chars)
            if "hindi assistant" in (transcript + overwrite_chars).lower():
                cmd.respond("Alright. Talk to me in urdu", code=code)
                main('ur-PK')
            elif "english assistant" in (transcript + overwrite_chars).lower():
                cmd.respond("Alright. Talk to me in English", code=code)
                main('en-US')
            cmd.discover(text=transcript + overwrite_chars, code=code)
            for i in range(10):
                print("Hello world")
            break
        num_chars_printed = 0
def listen_print_loop(responses, stream, code):
    """Iterates through server responses and prints them.
    The responses passed is a generator that will block until a response
    is provided by the server.
    Each response may contain multiple results, and each result may contain
    multiple alternatives; for details, see https://cloud.google.com/speech-to-text/docs/reference/rpc/google.cloud.speech.v1#streamingrecognizeresponse. Here we
    print only the transcription for the top alternative of the top result.
    In this case, responses are provided for interim results as well. If the
    response is an interim one, print a line feed at the end of it, to allow
    the next result to overwrite it, until the response is a final one. For the
    final one, print a newline to preserve the finalized transcription.
    """
    responses = (r for r in responses if (
        r.results and r.results[0].alternatives))
    music.load(r"C:\\Users\\mnauf\\Desktop\\rehandevice\\coins.mp3")
    num_chars_printed = 0
    for response in responses:
        if not response.results:
            continue
        # The `results` list is consecutive. For streaming, we only care about
        # the first result being considered, since once it's `is_final`, it
        # moves on to considering the next utterance.
        result = response.results[0]
        if not result.alternatives:
            continue
        # Display the transcription of the top alternative.
        top_alternative = result.alternatives[0]
        transcript = top_alternative.transcript
        # Display interim results, but with a carriage return at the end of the
        # line, so subsequent lines will overwrite them.
        #
        # If the previous result was longer than this one, we need to print
        # some extra spaces to overwrite the previous result
        overwrite_chars = ' ' * (num_chars_printed - len(transcript))
        if not result.is_final:
            sys.stdout.write(transcript + overwrite_chars + '\r')
            sys.stdout.flush()
            num_chars_printed = len(transcript)
        else:
            print("Listen print loop", transcript + overwrite_chars)
            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if re.search(r'\b(hello)\b', transcript.lower(), re.I):
                #print("Give me order")
                music.play()
                search(responses, stream, code)
                break
            elif re.search(r'\b(ہیلو)\b', transcript, re.I):
                music.play()
                search(responses, stream, code)
                break
            num_chars_printed = 0
def main(code):
    cmd.respond("I am Rayhaan dot A Eye. How can I help you?", code=code)
    client = speech.SpeechClient()
    config = speech.types.RecognitionConfig(
        encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=SAMPLE_RATE,
        language_code='en-US',
        max_alternatives=1,
        enable_word_time_offsets=True)
    streaming_config = speech.types.StreamingRecognitionConfig(
        config=config,
        interim_results=True)
    mic_manager = ResumableMicrophoneStream(SAMPLE_RATE, CHUNK_SIZE)
    print('Say "Quit" or "Exit" to terminate the program.')
    with mic_manager as stream:
        while not stream.closed:
            audio_generator = stream.generator()
            requests = (speech.types.StreamingRecognizeRequest(
                audio_content=content)
                for content in audio_generator)
            responses = client.streaming_recognize(streaming_config,
                                                   requests)
            # Now, put the transcription responses to use.
            try:
                listen_print_loop(responses, stream, code)
            except:
                listen

if __name__ == '__main__':
    main('en-US')
# [END speech_transcribe_infinite_streaming]
You can call your functions after recognition in a different thread. Example:
from threading import Thread  # import needed for this example

new_thread = Thread(target=music.play)
new_thread.daemon = True  # Not always needed; read more about the daemon property
new_thread.start()
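Applied to the question's own code (a hypothetical sketch using its cmd, transcript, overwrite_chars and code names), the long-running command handler could be dispatched the same way, so the recognition loop returns to the stream before the 60-second limit:

# Hypothetical: run the slow command handler off-thread so the
# recognition loop can keep consuming responses and refresh in time.
handler = Thread(target=cmd.discover,
                 kwargs={'text': transcript + overwrite_chars, 'code': code})
handler.daemon = True
handler.start()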
Or if you just want to prevent the exception, you can always use try/except. Example:
with mic_manager as stream:
    while not stream.closed:
        try:
            audio_generator = stream.generator()
            requests = (speech.types.StreamingRecognizeRequest(
                audio_content=content)
                for content in audio_generator)
            responses = client.streaming_recognize(streaming_config,
                                                   requests)
            # Now, put the transcription responses to use.
            listen_print_loop(responses, stream, code)
        except BaseException as e:
            print("Exception occurred - {}".format(str(e)))

How do I use the same data to create multiple files?

I am trying to create two files with the same data: one file to use for updating live web data, and the other as a log. One file needs to be appended to and updated frequently.
I have tried using a 'with open' statement for the log file. When I try reading it into a live web page, it shows me the data that was logged previously, and updates the data only when the file is closed.
#!/usr/bin/env python2.7
import os
import RPi.GPIO as GPIO
import time
import subprocess

#Solar Panel Script 1.0
#Set pin for Pump Relay Signal (PR = pin 29)
#Set up Pump Relay BCM5 (pin 29) as output pin in off position
GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.OUT, initial=0)
GPIO.setwarnings(False)

#Load Hot Water Tank (HWT), Solar Panel (SP), and Outside Temp (OT) with OWFS
#Create CSV file for temperature data
from time import sleep, strftime, time

with open("/var/www/html/data.csv", "a") as log:
    while True:
        with open("/mnt/1wire/28.C14777910F02/temperature", "r") as myfile:
            HWT = myfile.read().replace('\n', '')
        with open("/mnt/1wire/28.390877910402/temperature", "r") as myfile2:
            SP = myfile2.read().replace('\n', '')
        log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))
        #Solar Hot Water Heater Module
        #Turns on PR only if SP is 10F hotter than HWT. Checks OT for freezing temps; if less than 33, PR is off.
        print('hot water: ' + HWT)
        print('solar panel: ' + SP)
        flt_HWT = float(HWT)
        flt_SP = float(SP)
        if flt_HWT > 170:
            GPIO.output(5, GPIO.LOW)   #Pump Relay Off
        if flt_SP > (flt_HWT + 10):
            GPIO.output(5, GPIO.HIGH)  #Pump Relay On
        state = GPIO.input(5)
        print state
        sleep(20)  #10 Minutes = 600
I expected the log file to allow me to collect data from it while it was open.
log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))
This is where you are writing the log. You can simply include another with open() statement here (note the "a" mode, so the file is opened for appending):
with open("secondfile.log", "a") as secfile:
    log.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))  ## original log file can be here
    secfile.write("{0},{1},{2}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(HWT), str(SP)))  ## and here you write the second file
However, if you are writing multiple files it would be better to stick them into a function of their own.
def write_file(text, filename):
    try:
        with open(filename, "a") as file:  # append mode, so the file is writable
            file.write(text)
        return True
    except:
        return False  ## include any other exception stuff here
Now you can use:
success = write_file("log text", "filename.log")
if success:
    success = write_file("log2 text", "filename2.log")
if success:
    print("Yey, both files have been written to")
else:
    print("Awww, there was an error writing to the file")

TypeError: 'function' object is not subscriptable with TensorFlow

I'm trying to execute the code from https://github.com/lucfra/RFHO, more specifically from RFHO starting example.ipynb. The only thing I want to change is to run it in forward mode instead of reverse mode. This is the changed code:
import tensorflow as tf
import rfho as rf
from rfho.datasets import load_mnist

mnist = load_mnist(partitions=(.05, .01))  # 5% of data in training set, 1% in validation
# remaining in test set (change these percentages and see the effect on regularization hyperparameter)
x, y = tf.placeholder(tf.float32, name='x'), tf.placeholder(tf.float32, name='y')

# define the model (here use a linear model from rfho.models)
model = rf.LinearModel(x, mnist.train.dim_data, mnist.train.dim_target)

# vectorize the model, and build the state vector (augment by 1 since we are
# going to optimize the weights with momentum)
s, out, w_matrix = rf.vectorize_model(model.var_list, model.inp[-1], model.Ws[0],
                                      augment=0)
# (this function will also print some tensorflow infos and warnings about variables
# collections... we'll solve this)

# define error
error = tf.reduce_mean(rf.cross_entropy_loss(labels=y, logits=out), name='error')
constraints = []

# define training error by error + L2 weights penalty
rho = tf.Variable(0., name='rho')  # regularization hyperparameter
training_error = error + rho*tf.reduce_sum(tf.pow(w_matrix, 2))
constraints.append(rf.positivity(rho))  # regularization coefficient should be positive

accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)),
                                  "float"), name='accuracy')

# define learning rates and momentum factor as variables, to be optimized
eta = tf.Variable(.01, name='eta')
#mu = tf.Variable(.5, name='mu')

# now define the training dynamics (similar to tf.train.Optimizer)
optimizer = rf.GradientDescentOptimizer.create(s, eta, loss=training_error)

# add constraints for learning rate and momentum factor
constraints += optimizer.get_natural_hyperparameter_constraints()

# we want to optimize the weights w.r.t. training_error
# and hyperparameters w.r.t. validation error (that in this case is
# error evaluated on the validation set)
# we are going to use ForwardMode (changed from ReverseMode in the original example)
hyper_dict = {error: [rho, eta]}
hyper_opt = rf.HyperOptimizer(optimizer, hyper_dict, method=rf.ForwardHG)

# define helper for stochastic descent
ev_data = rf.ExampleVisiting(mnist.train, batch_size=2**8, epochs=200)
tr_suppl = ev_data.create_supplier(x, y)
val_supplier = mnist.validation.create_supplier(x, y)
test_supplier = mnist.test.create_supplier(x, y)

# Run all for some hyper-iterations and print progresses
def run(hyper_iterations):
    with tf.Session().as_default() as ss:
        ev_data.generate_visiting_scheme()  # needed for remembering the examples visited in the forward pass
        for hyper_step in range(hyper_iterations):
            hyper_opt.initialize()  # initializes all variables or resets weights to initial state
            hyper_opt.run(ev_data.T, train_feed_dict_supplier=tr_suppl,
                          val_feed_dict_suppliers=val_supplier,
                          hyper_constraints_ops=constraints)
            # print('Concluded hyper-iteration', hyper_step)
            # print('Test accuracy:', ss.run(accuracy, feed_dict=test_supplier()))
            # print('Validation error:', ss.run(error, feed_dict=val_supplier()))

saver = rf.Saver('Staring example', collect_data=False)
with saver.record(rf.Records.tensors('error', fd=('x', 'y', mnist.validation), rec_name='valid'),
                  rf.Records.tensors('error', fd=('x', 'y', mnist.test), rec_name='test'),
                  rf.Records.tensors('accuracy', fd=('x', 'y', mnist.validation), rec_name='valid'),
                  rf.Records.tensors('accuracy', fd=('x', 'y', mnist.test), rec_name='test'),
                  rf.Records.hyperparameters(),
                  rf.Records.hypergradients(),
                  ):  # a context to print some statistics.
    # If you execute again any cell containing the model construction,
    # restart the notebook or reset the tensorflow graph in order to prevent errors
    # due to tensor namings
    run(20)  # this will take some time... run it for fewer hyper-iterations for a quicker look
The problem is that I get TypeError: 'function' object is not subscriptable after the first iteration:
Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/examples/simply_example.py", line 80, in <module>
    run(20)  # this will take some time... run it for less hyper-iterations for a quicker look
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/examples/simply_example.py", line 63, in run
    hyper_constraints_ops=constraints)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/save_and_load.py", line 624, in _saver_wrapped
    res = f(*args, **kwargs)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 689, in run
    hyper_batch_step=self.hyper_batch_step.eval())
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 581, in run_all
    return self.hyper_gradients(val_feed_dict_suppliers, hyper_batch_step)
  File "/Users/repierau/Documents/FSHO/RFHO-master/rfho/hyper_gradients.py", line 551, in hyper_gradients
    val_sup_lst.append(val_feed_dict_supplier[k])
TypeError: 'function' object is not subscriptable
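Judging from the last frame of the traceback alone (an inference, not verified against the RFHO sources): hyper_gradients subscripts val_feed_dict_supplier with a key, so the forward-mode path appears to expect a mapping rather than the bare supplier function that the reverse-mode example passes. A hedged guess at a fix would be to key the supplier by the validation tensor:

# Hypothetical fix inferred from `val_feed_dict_supplier[k]` in the traceback:
# pass a dict keyed by the tensor being validated instead of a bare function.
hyper_opt.run(ev_data.T, train_feed_dict_supplier=tr_suppl,
              val_feed_dict_suppliers={error: val_supplier},
              hyper_constraints_ops=constraints)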

Why is there still a Rate Limit error from Twitter using Tweepy?

I am trying to get all the tweets from the previous day. To address Twitter's rate limit, I implemented two pieces of code.
if counter == 4000:
    time.sleep(60*20)  # wait for 20 min every time 4,000 tweets are extracted
    counter == 0
    continue
I looked at the output file, and usually I get the rate limit message after about 5500-6500 tweet entities have been extracted. So, to be conservative, I set it so that every time 4,000 tweets (and the associated extracted fields) are extracted, the script pauses for 20 minutes (to cover Twitter's designated 15-minute interval).
I also found someone else's attempt to address the same issue using the following code:
except tweepy.TweepError:
    time.sleep(60*20)
    continue
It is supposed to pause the script when there is a TweepError. I tested it and it didn't seem to work, but I included it anyway.
The error I got (after extracting 10,700 tweet entities) is as follows:
Traceback (most recent call last):
  File "C:\Users\User\Dropbox\Python exercises\_Scraping\Social media\TweepyModule\TweepyTut1.18.py", line 32, in <module>
    since='2014-09-15', until='2014-09-16').items(999999999): # changeable here
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\cursor.py", line 181, in next
    self.current_page = self.page_iterator.next()
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\cursor.py", line 99, in next
    data = self.method(max_id=self.max_id, parser=RawParser(), *self.args, **self.kargs)
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\binder.py", line 230, in _call
    return method.execute()
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\binder.py", line 203, in execute
    raise TweepError(error_msg, resp)
tweepy.error.TweepError: {"errors":[{"message":"Rate limit exceeded","code":88}]}
[Finished in 1937.2s with exit code 1]
Here is my code:
import tweepy
import time
import csv

ckey = ""
csecret = ""
atoken = ""
asecret = ""

OAUTH_KEYS = {'consumer_key': ckey, 'consumer_secret': csecret,
              'access_token_key': atoken, 'access_token_secret': asecret}
auth = tweepy.OAuthHandler(OAUTH_KEYS['consumer_key'], OAUTH_KEYS['consumer_secret'])
api = tweepy.API(auth)

searchTerms = '"good book"'
counter = 0

for tweet in tweepy.Cursor(api.search, q=searchTerms,
                           since='2014-09-15', until='2014-09-16').items(999999999):  # changeable here
    try:
        '''print "Name:", tweet.author.name.encode('utf8')
        print "Screen-name:", tweet.author.screen_name.encode('utf8')
        print "Tweet created:", tweet.created_at'''
        placeHolder = []
        placeHolder.append(tweet.author.name.encode('utf8'))
        placeHolder.append(tweet.author.screen_name.encode('utf8'))
        placeHolder.append(tweet.created_at)
        with open("TweetData_goodBook_15SEP2014_all.csv", "ab") as f:  # changeable here
            writeFile = csv.writer(f)
            writeFile.writerow(placeHolder)
        counter += 1
        if counter == 4000:
            time.sleep(60*20)  # wait for 20 min every time 4,000 tweets are extracted
            counter == 0
            continue
    except tweepy.TweepError:
        time.sleep(60*20)
        continue
    except IOError:
        time.sleep(60*2.5)
        continue
    except StopIteration:
        break
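One thing visible in the snippet itself (an observation about the posted code, not a confirmed diagnosis of the rate-limit error): counter == 0 is a comparison, not an assignment, so the counter is never reset. It passes 4,000 once and never equals it again, meaning the 20-minute sleep fires at most one time. A minimal correction:

if counter >= 4000:
    time.sleep(60*20)  # wait 20 min every 4,000 tweets
    counter = 0        # assignment, not '==', so the check can trigger again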

Why can't an object use a method as an attribute in the Python package ComplexNetworkSim?

I'm trying to use the Python package ComplexNetworkSim, which builds on networkx and SimPy, to simulate an agent-based model of how messages propagate within networks.
Here is my code:
from ComplexNetworkSim import NetworkSimulation, NetworkAgent, Sim
import networkx as nx

# define constants for our example of states
NO_MESSAGE = 0
MESSAGE = 1

class Message(object):
    def __init__(self, topic_pref):
        self.relevance = topic_pref

class myAgent(NetworkAgent):
    def __init__(self, state, initialiser):
        NetworkAgent.__init__(self, state, initialiser)
        self.state = MESSAGE
        self.topic_pref = 0.5

    def Run(self):
        while True:
            if self.state == MESSAGE:
                self.message = self.Message(topic_pref, self, TIMESTEP)
                yield Sim.hold, self, NetworkAgent.TIMESTEP_DEFAULT
            elif self.state == NO_MESSAGE:
                yield Sim.hold, self, NetworkAgent.TIMESTEP_DEFAULT

# Network and initial states of agents
nodes = 30
G = nx.scale_free_graph(nodes)
states = [MESSAGE for n in G.nodes()]

# Simulation constants
MAX_SIMULATION_TIME = 25.0
TRIALS = 2

def main():
    directory = 'test'  # output directory
    # run simulation with parameters
    # - complex network structure
    # - initial state list
    # - agent behaviour class
    # - output directory
    # - maximum simulation time
    # - number of trials
    simulation = NetworkSimulation(G,
                                   states,
                                   myAgent,
                                   directory,
                                   MAX_SIMULATION_TIME,
                                   TRIALS)
    simulation.runSimulation()

if __name__ == '__main__':
    main()
(There may be other problems downstream with this code and it is not fully tested.)
My problem is that the myAgent object is not properly calling the method Run as an attribute. Specifically, this is the error message that I get when I try to run the above code:
Starting simulations...
---Trial 0 ---
set up agents...
Traceback (most recent call last):
  File "simmessage.py", line 55, in <module>
    main()
  File "simmessage.py", line 52, in main
    simulation.runSimulation()
  File "/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/ComplexNetworkSim-0.1.2-py2.7.egg/ComplexNetworkSim/simulation.py", line 71, in runSimulation
    self.runTrial(i)
  File "/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/ComplexNetworkSim-0.1.2-py2.7.egg/ComplexNetworkSim/simulation.py", line 88, in runTrial
    self.activate(agent, agent.Run())
AttributeError: 'myAgent' object has no attribute 'Run'
Does anybody know why this is? I can't figure out how my code differs substantially from the example in ComplexNetworkSim.
I've run your code on my machine and there the Run method gets called.
My best guess is what Paulo Scardine wrote, but since I can't reproduce the problem I can't actually debug it.