An "AttributeError: sender" is thrown as the Exchange query iterates; the same happens with other fields (message_id, etc.). My only option at this point is to wrap the loop in a try/except, which means refactoring a lot of the code inside it. However, I would not expect the query to crash under normal circumstances because of a data issue. What could be going wrong? It seems there is a 'bad' email object that causes it.
kwargs = {"is_read": False}
kwargs["datetime_received__gt"] = some_date_time
filtered_items = my_exchange._service_account.inbox.filter(**kwargs)
filtered_items.page_size = 20
print(filtered_items.count())
3 <-- 3 objects
count = 0
for sender_obj, msg_id, msg_subj, msg_text, msg_size in filtered_items.values_list("sender", "message_id", "subject", "text_body", "size").iterator():
    print(sender_obj)
    count = count + 1
    print(count)
Mailbox(name='Some User1', email_address='someuser1#myemail.acme', routing_type='SMTP', mailbox_type='Mailbox')
1
Mailbox(name='Some User2', email_address='someuser2#myemail.acme', routing_type='SMTP', mailbox_type='OneOff')
2
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/exchangelib/queryset.py", line 273, in __iter__
yield from self._format_items(items=self._query(), return_format=self.return_format)
File "/usr/local/lib/python3.6/site-packages/exchangelib/queryset.py", line 352, in _item_yielder
yield item_func(i)
File "/usr/local/lib/python3.6/site-packages/exchangelib/queryset.py", line 380, in <lambda>
item_func=lambda i: tuple(f.get_value(i) for f in self.only_fields),
File "/usr/local/lib/python3.6/site-packages/exchangelib/queryset.py", line 380, in <genexpr>
item_func=lambda i: tuple(f.get_value(i) for f in self.only_fields),
File "/usr/local/lib/python3.6/site-packages/exchangelib/fields.py", line 189, in get_value
return getattr(item, self.field.name)
AttributeError: sender
It looks like you are trying to get the sender field of something that is not a Message. Probably your inbox contains a meeting request or some other non-message object.
I'm not sure this is a bug. What did you expect to be the result of getting the sender attribute of something that does not have a sender field?
If you want only Message objects in your list, you can try adding a filter on item_class='IPM.Note' (messages use the 'IPM' prefix; 'IPF.Note' is the container class for folders).
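If you'd rather iterate defensively, the underlying idea can be sketched in plain Python; `Message` and `MeetingRequest` below are hypothetical stand-ins, not the exchangelib classes:

```python
class Message:
    def __init__(self, sender):
        self.sender = sender

class MeetingRequest:
    pass  # has no sender attribute, like the 'bad' object in the inbox

items = [Message("user1@example.com"), MeetingRequest(), Message("user2@example.com")]

# skip anything that lacks the field instead of letting the whole loop die
senders = [item.sender for item in items if hasattr(item, "sender")]
print(senders)  # ['user1@example.com', 'user2@example.com']
```

Note that with values_list() the AttributeError is raised inside exchangelib's own generator, so a try/except around the loop body won't let you resume iteration; filtering up front (or fetching whole items and guarding as above) avoids the problem.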
Why is it that this code won't work and gives an AttributeError?
internship = parser.find_all('a', attrs = {'title': lambda job: job.startswith('Internship')})
while this one works:
internship = parser.find_all('a', attrs = {'title': lambda job: job and job.startswith('Internship')})
This is the error that I got from the first code:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\bs4\element.py", line 1299, in find_all
return self._find_all(name, attrs, text, limit, generator, **kwargs)
File "C:\Python27\lib\site-packages\bs4\element.py", line 549, in _find_all
found = strainer.search(i)
File "C:\Python27\lib\site-packages\bs4\element.py", line 1690, in search
found = self.search_tag(markup)
File "C:\Python27\lib\site-packages\bs4\element.py", line 1662, in search_tag
if not self._matches(attr_value, match_against):
File "C:\Python27\lib\site-packages\bs4\element.py", line 1722, in _matches
return match_against(markup)
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'startswith'
In the first line of code, you get the AttributeError because the lambda assumes job contains a string, which has a startswith() method; but for tags that have no title attribute, job is None. In the second line of code, you don't get the error because the lambda checks that job is not None before calling startswith() on it. Another (not quite equivalent, but arguably better) way to express
lambda job: job and job.startswith('Internship')
is
lambda job: job.startswith('Internship') if job else False
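The guard works because `and` short-circuits: when job is None, the falsy left-hand side is returned and startswith() is never called. A quick standalone check (no BeautifulSoup needed):

```python
match = lambda job: job and job.startswith('Internship')

# a tag without a title attribute hands the lambda None
print(match(None))             # None (falsy, so the tag is rejected)
print(match('Internship X'))   # True
print(match('Other'))          # False
```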
I am very new to TensorFlow, and I am trying to display my first TensorBoard. I downloaded and successfully ran the board for the example given here:
https://www.tensorflow.org/versions/r0.7/how_tos/summaries_and_tensorboard/index.html
Following the method, I have in my code:
weights_hidden = tf.Variable(tf.truncated_normal([image_size * image_size, 1024]), name='weights_hidden')
_ = tf.histogram_summary('weights_hidden', weights_hidden)
and when I run the session
with tf.Session(graph=graph) as session:
    merged = tf.merge_all_summaries()
    writer = tf.train.SummaryWriter("/tmp/test", session.graph_def)
    tf.initialize_all_variables().run()
    for step in range(num_steps):
        summary_str, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            writer.add_summary(summary_str, step)
The process crashes with the following error
Traceback (most recent call last):
File "/home/xxx/Desktop/xxx/xxx.py", line 108, in <module>
writer.add_summary(summary_str, step)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/summary_io.py", line 128, in add_summary
event = event_pb2.Event(wall_time=time.time(), summary=summary)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 522, in init
_ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 453, in _ReraiseTypeErrorWithFieldName
six.reraise(type(exc), exc, sys.exc_info()[2])
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 520, in init
copy.MergeFrom(new_val)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1237, in MergeFrom
"expected %s got %s." % (cls.__name__, type(msg).__name__))
TypeError: Parameter to MergeFrom() must be instance of same class: expected Summary got NoneType. for field Event.summary
What am I missing? Any help/comment would be very welcome.
Thank you very much for the help,
K.
You should write:
_, summary_str, l, predictions = session.run(
[optimizer, merged, loss, train_prediction], feed_dict=feed_dict)
I added a fourth fetch, merged, which corresponds to the summary you are trying to get; before, summary_str was only receiving the result of the optimization step (which is None).
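The mechanics can be shown without TensorFlow: session.run returns results positionally aligned with the fetch list, and an optimizer op evaluates to None, so the original unpacking put None into summary_str. A pure-Python sketch (fake_run and its value table are made up for illustration):

```python
def fake_run(fetches, values):
    # mimic session.run: results line up positionally with the fetch list
    return [values[name] for name in fetches]

values = {"optimizer": None, "merged": b"summary-bytes", "loss": 0.25, "pred": [1, 0]}

# buggy fetch list: summary_str receives the optimizer's None
summary_str, l, predictions = fake_run(["optimizer", "loss", "pred"], values)
print(summary_str)  # None

# fixed fetch list: discard the optimizer result, keep the summary
_, summary_str, l, predictions = fake_run(["optimizer", "merged", "loss", "pred"], values)
print(summary_str)  # b'summary-bytes'
```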
I have a cherrypy app that's got a Monitor instance like so:
mail_checker = Monitor(cherrypy.engine, self.mail_processor.poll_history_feed, frequency=10)
To put it simply, it checks a Gmail inbox for new emails and processes them. Sometimes poll_history_feed() throws an exception; my current guess is that it's because of our unstable internet connection. When that happens, it stops running until I restart the whole app (sample of the traceback below).
[01/Mar/2016:17:08:29] ENGINE Error in background task thread function <bound method MailProcessor.poll_history_feed of <mailservices.mailprocessor.MailProcessor object at 0x10a2f0250>>.
Traceback (most recent call last):
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/site-packages/cherrypy/process/plugins.py", line 500, in run
self.function(*self.args, **self.kwargs)
File "/Users/hashtaginteractive/Projects/emaild/emaild-source/mailservices/mailprocessor.py", line 12, in poll_history_feed
labelIds=["INBOX", "UNREAD"]
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/site-packages/oauth2client/util.py", line 142, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/site-packages/googleapiclient/http.py", line 730, in execute
return self.postproc(resp, content)
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/site-packages/googleapiclient/model.py", line 207, in response
return self.deserialize(content)
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/site-packages/googleapiclient/model.py", line 262, in deserialize
content = content.decode('utf-8')
File "/Users/hashtaginteractive/Projects/.venvs/emaild/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 23: invalid start byte
Is there any way to set this up so that it automatically restarts either the server or this particular Monitor instance whenever an exception happens?
You have to wrap the call to self.mail_processor.poll_history_feed in a try/except block and log the error for convenience.
def safe_poll_history_feed(self):
    try:
        self.mail_processor.poll_history_feed()
    except Exception:
        cherrypy.engine.log("Exception in mailprocessor monitor", traceback=True)
And then use the safe_poll_history_feed method when constructing the Monitor.
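The same guard generalizes to any background task; here is a small reusable wrapper (a pure-Python sketch, the names make_safe and log are my own, not part of the CherryPy API):

```python
def make_safe(task, log):
    """Wrap a background task so exceptions are logged instead of killing the thread."""
    def wrapper(*args, **kwargs):
        try:
            return task(*args, **kwargs)
        except Exception as exc:
            log("Exception in background task: %r" % (exc,))
    return wrapper

# usage sketch:
# Monitor(cherrypy.engine,
#         make_safe(self.mail_processor.poll_history_feed, cherrypy.engine.log),
#         frequency=10)
```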
I'm still trying to make a social network with py2neo + Flask + Neo4j.
I've got a problem while searching my database with py2neo. I want to find all the users whose username includes a particular substring, for example all the users whose username includes "dav". I wrote the code below, and I don't know why I get this error...
from py2neo import Graph

graph = Graph("http://neo4j:123#localhost:7474/ ")

def search(name):
    users = graph.merge("Person")
    for N in users:
        print N['username']
and this is my error:
Traceback (most recent call last):
  File "", line 1, in
  File "/home/ali/Desktop/flask/search.py", line 10, in search
    users=graph.cypher.execute('match (p:Person) return p'
  File "/usr/local/lib/python2.7/dist-packages/py2neo/core.py", line 659, in cypher
    metadata = self.resource.metadata
  File "/usr/local/lib/python2.7/dist-packages/py2neo/core.py", line 213, in metadata
    self.get()
  File "/usr/local/lib/python2.7/dist-packages/py2neo/core.py", line 267, in get
    raise_from(self.error_class(message, **content), error)
  File "/usr/local/lib/python2.7/dist-packages/py2neo/util.py", line 235, in raise_from
    raise exception
py2neo.error.GraphError: HTTP GET returned response 404
Your URL is wrong, you should change it to this:
Graph("http://neo4j:123#localhost:7474/db/data")
Also, you can't execute cypher through the merge function, instead you should do this:
users = graph.cypher.execute('match (p:Person) return p')
So I have a Django site that is giving me this AmbiguousTimeError. I have a job that fires when a product is saved and is given a brief countdown before updating my search index. It looks like an update was made during the repeated Daylight Saving Time hour, and pytz cannot figure out what to do with it.
How can I prevent this from happening the next time the clocks shift for DST?
[2012-11-06 14:22:52,115: ERROR/MainProcess] Unrecoverable error: AmbiguousTimeError(datetime.datetime(2012, 11, 4, 1, 11, 4, 335637),)
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/celery/worker/__init__.py", line 353, in start
component.start()
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 369, in start
self.consume_messages()
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 842, in consume_messages
self.connection.drain_events(timeout=10.0)
File "/usr/local/lib/python2.6/dist-packages/kombu/connection.py", line 191, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/kombu/transport/virtual/__init__.py", line 760, in drain_events
self._callbacks[queue](message)
File "/usr/local/lib/python2.6/dist-packages/kombu/transport/virtual/__init__.py", line 465, in _callback
return callback(message)
File "/usr/local/lib/python2.6/dist-packages/kombu/messaging.py", line 485, in _receive_callback
self.receive(decoded, message)
File "/usr/local/lib/python2.6/dist-packages/kombu/messaging.py", line 457, in receive
[callback(body, message) for callback in callbacks]
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 560, in receive_message
self.strategies[name](message, body, message.ack_log_error)
File "/usr/local/lib/python2.6/dist-packages/celery/worker/strategy.py", line 25, in task_message_handler
delivery_info=message.delivery_info))
File "/usr/local/lib/python2.6/dist-packages/celery/worker/job.py", line 120, in __init__
self.eta = tz_to_local(maybe_iso8601(eta), self.tzlocal, tz)
File "/usr/local/lib/python2.6/dist-packages/celery/utils/timeutils.py", line 52, in to_local
dt = make_aware(dt, orig or self.utc)
File "/usr/local/lib/python2.6/dist-packages/celery/utils/timeutils.py", line 211, in make_aware
return localize(dt, is_dst=None)
File "/usr/local/lib/python2.6/dist-packages/pytz/tzinfo.py", line 349, in localize
raise AmbiguousTimeError(dt)
AmbiguousTimeError: 2012-11-04 01:11:04.335637
EDIT: I fixed it temporarily with this code in celery:
# celery/worker/job.py, line 120
try:
    self.eta = tz_to_local(maybe_iso8601(eta), self.tzlocal, tz)
except Exception:
    self.eta = None
I don't want to carry changes in a pip-installed package, so I need to fix what I can in my own code. This runs when I save my app:
self.task_cls.apply_async(
    args=[action, get_identifier(instance)],
    countdown=15
)
I'm assuming I need to somehow detect whether I'm in the ambiguous hour and adjust the countdown. I think I'm going to have to clear the queued tasks to fix the current state, but how can I prevent this from happening the next time the clocks shift for DST?
It's not clear exactly what you're trying to achieve, but basically you need to account for the way the world works: you can't avoid ambiguous times when you convert from local time to UTC (or to a different zone's local time) during the hour when the clocks go back.
Likewise you ought to be aware that there are "gap" or "impossible" times, where a reasonable-sounding local time simply doesn't occur.
I don't know what options Python gives you, but ideally an API should let you resolve ambiguous times however you want - whether that's throwing an error, giving you the earlier occurrence, the later occurrence, or something else.
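pytz does give you exactly that choice, via the is_dst argument of localize. A minimal sketch (US/Eastern is an assumed zone for illustration; the question's actual server zone is unknown):

```python
from datetime import datetime

import pytz

eastern = pytz.timezone('US/Eastern')
wall = datetime(2012, 11, 4, 1, 11, 4)  # this wall-clock time occurs twice

# is_dst=None is the strict mode Celery was using: ambiguity raises an error
try:
    eastern.localize(wall, is_dst=None)
except pytz.exceptions.AmbiguousTimeError as exc:
    print(exc)

# or resolve the ambiguity yourself by picking an occurrence
earlier = eastern.localize(wall, is_dst=True)   # the first pass, still on DST
later = eastern.localize(wall, is_dst=False)    # the second pass, standard time
print(earlier.utcoffset(), later.utcoffset())
```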
Apparently, Celery solved this issue:
https://github.com/celery/celery/issues/1061