How to upgrade Django Q and Django without clearing the queue?

When deploying the project after upgrading Django and Django-Q, I got the following log.
Is there a way to avoid that error but still keep the tasks running in the queue to avoid downtime?
08:59:03 [Q] INFO Process-1:440 pushing tasks at 12192
Process Process-1:440:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/cluster.py", line 340, in pusher
task = SignedPackage.loads(task[1])
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/signing.py", line 31, in loads
serializer=PickleSerializer)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/core_signing.py", line 36, in loads
return serializer().loads(data)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/signing.py", line 44, in loads
return pickle.loads(data)
AttributeError: Can't get attribute 'simple_class_factory' on <module 'django.db.models.base' from '/home/ubuntu/py34env/lib/python3.6/site-packages/django/db/models/base.py'>
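One possible mitigation (a hedged sketch, not a confirmed fix from this thread): the traceback suggests the queued tasks were pickled under the old Django and reference django.db.models.base.simple_class_factory, which the upgraded Django no longer provides, so letting the old cluster work the queue down to zero before switching code avoids unpickling old payloads with new code. Assuming the installed django-q exposes get_broker() and queue_size():

# Hedged sketch: drain the queue on the OLD code/Django version before
# deploying the upgrade, so no task pickled under the old Django has to
# be unpickled by the new one. Assumes django-q's broker API provides
# get_broker() and queue_size() in your installed version.
import time

import django
django.setup()  # assumes DJANGO_SETTINGS_MODULE is set in the environment

from django_q.brokers import get_broker

broker = get_broker()

# Stop enqueueing new work first (disable the code paths calling async_task),
# then wait for the running cluster to finish what is already queued.
while broker.queue_size() > 0:
    print("tasks still queued: %s" % broker.queue_size())
    time.sleep(5)

print("queue empty - safe to deploy the upgraded Django / Django-Q")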


Failed Manage.py runserver command

I'm a beginner in web development. I'm using PyCharm and the Django 2.1 framework.
I installed Django with ('py -m pip install django==2.1') and that worked.
I started the myweb project with ('py -m django-admin startproject myweb .') and that also worked,
but when I run the ('manage.py runserver') command, this is the result:
(venv) C:\Users\مرحبا\PycharmProjects\Myweb>manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
March 27, 2020 - 20:08:58
Django version 3.0.4, using settings 'myweb.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
Exception in thread <bound method Thread.name of <Thread(django-main-thread, started daemon 5152)>>:
Traceback (most recent call last):
File "C:\Users\مرحبا\AppData\Local\Programs\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\مرحبا\AppData\Local\Programs\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\site-packages\django\core\management\commands\runserver.py", line 139, in inner_run
ipv6=self.use_ipv6, threading=threading, server_cls=self.server_cls)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\site-packages\django\core\servers\basehttp.py", line 206, in run
httpd = httpd_cls(server_address, WSGIRequestHandler, ipv6=ipv6)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\site-packages\django\core\servers\basehttp.py", line 67, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\socketserver.py", line 449, in __init__
self.server_bind()
File "C:\Users\مرحبا\AppData\Local\Programs\lib\wsgiref\simple_server.py", line 50, in server_bind
HTTPServer.server_bind(self)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\http\server.py", line 139, in server_bind
self.server_name = socket.getfqdn(host)
File "C:\Users\مرحبا\AppData\Local\Programs\lib\socket.py", line 680, in getfqdn
aliases.insert(0, hostname)
AttributeError: 'str' object has no attribute 'insert'
Could you help me please?
In the last line of the error it says
aliases.insert(0, hostname) AttributeError: 'str' object has no attribute 'insert'
Your aliases variable is a string, not a list, so you can't call .insert() on it; that method doesn't exist on strings.
You need to make sure aliases is a list in your code.
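To see why the traceback ends the way it does, here is a minimal illustration (the values are made up): insert() exists on lists but not on strings.

# Minimal illustration: list has .insert(), str does not.
aliases = ["mailhost", "www"]        # a list works
aliases.insert(0, "myhost")
print(aliases)                        # ['myhost', 'mailhost', 'www']

aliases = "mailhost"                  # a plain string does not
try:
    aliases.insert(0, "myhost")
except AttributeError as exc:
    print(exc)                        # 'str' object has no attribute 'insert'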

Q: Sonos Python Self Test error: No handlers could be found for logger "smapi"

I am trying to run the Sonos self test for a music service on Sonos.
After installing the dependencies and filling out the config file, I try to run the Python Sonos self test, but it runs into an error, and I have no clue what the underlying issue might be:
No handlers could be found for logger "smapi"
Traceback (most recent call last):
File "suite_selftest.py", line 226, in <module>
nightly_mode(parser.config_file)
File "suite_selftest.py", line 51, in nightly_mode
development_mode(config_file)
File "suite_selftest.py", line 186, in development_mode
fixtures.append(getlastupdate.PollingIntervalTest(suite.client, suite.smapiservice))
File "/Users/thomas/Desktop/PythonSelfTest/smapi/content_workflow/getlastupdate.py", line 20, in __init__
self.poll_interval = self.smapiservice.get_polling_interval()
File "../../sonos-1.1.0.dev_r300235-py2.7.egg/sonos/smapi/smapiservice.py", line 465, in get_polling_interval
File "/usr/local/Cellar/python#2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ConfigParser.py", line 362, in getfloat
return self._get(section, float, option)
File "/usr/local/Cellar/python#2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ConfigParser.py", line 356, in _get
return conv(self.get(section, option))
ValueError: could not convert string to float:
Found the fix already: I forgot to add the polling interval in the config file.
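For anyone hitting the same thing, the ValueError comes from ConfigParser.getfloat() being handed an empty value, so setting the option is enough. A small sketch (Python 3 configparser; the section and option names below are made up, the real keys come from the Sonos self-test config file):

# Hedged sketch: getfloat() raises ValueError on an empty value, which matches
# the self-test traceback; once the polling interval is set, conversion works.
from configparser import ConfigParser

broken = ConfigParser()
broken.read_string("[service]\npolling_interval =\n")   # value left empty
try:
    broken.getfloat("service", "polling_interval")
except ValueError as exc:
    print("empty value:", exc)        # could not convert string to float

fixed = ConfigParser()
fixed.read_string("[service]\npolling_interval = 3600\n")
print(fixed.getfloat("service", "polling_interval"))    # 3600.0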

Pipeline will fail on GCP when writing tensorflow transform metadata

I hope somebody here can help. I've been googling this error like crazy but haven't found anything.
I have a pipeline that works perfectly when executed locally, but it fails when executed on GCP. These are the error messages I get:
Workflow failed. Causes: S03:Write transform fn/WriteMetadata/ResolveBeamFutures/CreateSingleton/Read+Write transform fn/WriteMetadata/ResolveBeamFutures/ResolveFutures/Do+Write transform fn/WriteMetadata/WriteMetadata failed., A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
Traceback (most recent call last):
File "preprocess.py", line 491, in <module>
main()
File "preprocess.py", line 487, in main
transform_data(args,pipeline_options,runner)
File "preprocess.py", line 451, in transform_data
eval_data |= 'Identity eval' >> beam.ParDo(Identity())
File "/Library/Python/2.7/site-packages/apache_beam/pipeline.py", line 335, in __exit__
self.run().wait_until_finish()
File "/Library/Python/2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 897, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 294, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10607)
def start(self):
File "apache_beam/runners/worker/operations.py", line 295, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10501)
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 300, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:9702)
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 225, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in loads
return load(file)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1083, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: __new__() takes exactly 4 arguments (1 given)
Any ideas??
Thanks,
Pedro
If the pipeline works locally but fails on GCP, it's possible that you're running into a version mismatch.
Which TensorFlow, tf.Transform, and Beam versions are you running locally and on GCP?
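One way to rule a mismatch out (a hedged sketch, not part of the original answer) is to pin the Dataflow workers to the exact package versions you run locally, for example by shipping a requirements file through the pipeline options. The project, bucket, and version numbers below are placeholders:

# Hedged sketch: make the Dataflow workers install the same package versions
# used locally, so pickled transforms deserialize against matching code.
# requirements.txt (placeholder versions - take yours from `pip freeze`):
#   apache-beam[gcp]==2.2.0
#   tensorflow-transform==0.4.0
#   tensorflow==1.4.1
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-gcp-project',             # placeholder
    '--temp_location=gs://my-bucket/tmp',   # placeholder
    '--requirements_file=requirements.txt',
    # or, if your code ships as a package with a setup.py:
    # '--setup_file=./setup.py',
])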

TypeError when using botocore to read from AWS SQS queue

I'm using a Tornado server with tornado-botocore to connect to Amazon SQS.
When running stress tests, we sometimes get the following exception:
Traceback (most recent call last):
File "/home/app/handlers/WebSocketsHandler.py", line 95, in listen_outgoing_queue
message = yield tornado.gen.Task(self.outgoing_queue.read)
File "/home/local/lib/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/home/local/lib/python2.7/site-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/home/local/lib/python2.7/site-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/home/local/lib/python2.7/site-packages/tornado_botocore/base.py", line 70, in prepare_response
response_dict, operation_model.output_shape)
File "/home/local/lib/python2.7/site-packages/botocore/parsers.py", line 155, in parse
return self._do_error_parse(response, shape)
File "/home/.env/local/lib/python2.7/site-packages/botocore/parsers.py", line 314, in _do_error_parse
root = self._parse_xml_string_to_dom(xml_contents)
File "/home/local/lib/python2.7/site-packages/botocore/parsers.py", line 274, in _parse_xml_string_to_dom
parser.feed(xml_string)
TypeError: must be string or read-only buffer, not None
Could it be caused by the concurrency? Has anyone encountered such behavior?
We are using tornado 4.2.1, botocore 0.65.0, and tornado-botocore 0.1.6.
The problem was solved once I removed the @tornado.gen.engine decorator from the method.
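For context, tornado.gen.engine is Tornado's legacy callback-style interface, and tornado.gen.coroutine is the supported replacement in Tornado 4.x. A hedged sketch of what that change can look like (the handler and queue names are illustrative, not the poster's actual code):

import tornado.gen
import tornado.websocket


class WebSocketsHandler(tornado.websocket.WebSocketHandler):
    # self.outgoing_queue is assumed to be set up elsewhere, as in the
    # poster's code; this class only illustrates the decorator change.

    # Before (legacy style the poster removed):
    #
    # @tornado.gen.engine
    # def listen_outgoing_queue(self):
    #     message = yield tornado.gen.Task(self.outgoing_queue.read)
    #     self.write_message(message)

    # After: either drop the decorator entirely, or, if the method still
    # needs to yield futures, use gen.coroutine instead of gen.engine.
    @tornado.gen.coroutine
    def listen_outgoing_queue(self):
        message = yield tornado.gen.Task(self.outgoing_queue.read)
        self.write_message(message)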

Errno 10053 with Python IDLE

Out of the blue today I started having this problem when trying to run the Python shell from my .py file. The shell window opens with a blinking cursor. If I type anything and hit Enter, the following error appears. Python IDLE doesn't work at all, and I only get this error.
IDLE internal error in runcode()
Traceback (most recent call last):
File "C:\Python27\lib\idlelib\rpc.py", line 235, in asyncqueue
self.putmessage((seq, request))
File "C:\Python27\lib\idlelib\rpc.py", line 332, in putmessage
n = self.sock.send(s[:BUFSIZE])
error: [Errno 10053] An established connection was aborted by the software in your host machine
Python (command line) works fine, but IDLE does not. I've tried rebooting Windows and doing a system restore with no luck. I've been googling for answers with little success. My firewall has always been disabled. Trying to start IDLE from the command line returns the following error.
C:\Python27>python.exe -m idlelib.idle
Failed to load extension 'CallTips'
Traceback (most recent call last):
File "C:\Python27\lib\idlelib\EditorWindow.py", line 1061, in load_standard_ex
tensions
self.load_extension(name)
File "C:\Python27\lib\idlelib\EditorWindow.py", line 1076, in load_extension
cls = getattr(mod, name)
AttributeError: 'module' object has no attribute 'CallTips'
----------------------------------------
Unhandled server exception!
Thread: SockThread
Client Address: ('127.0.0.1', 49552)
Request: <socket._socketobject object at 0x0176BCA8>
Traceback (most recent call last):
File "C:\Python27\lib\SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python27\lib\SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "C:\Python27\lib\SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python27\lib\idlelib\rpc.py", line 503, in __init__
SocketServer.BaseRequestHandler.__init__(self, sock, addr, svr)
File "C:\Python27\lib\SocketServer.py", line 649, in __init__
self.handle()
File "C:\Python27\lib\idlelib\run.py", line 276, in handle
executive = Executive(self)
File "C:\Python27\lib\idlelib\run.py", line 315, in __init__
self.calltip = CallTips.CallTips()
AttributeError: 'module' object has no attribute 'CallTips'
Windows 32-bit OS. Thanks in advance for your help.
This is really weird, but a great question you posted! Nice to read.
Here could be a solution: https://stackoverflow.com/a/3277996/1320237
Why would it be impossible to import the CallTips module from the standard library? Maybe you have a Python 3 IDLE open? Could they be interfering?
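If you want to rule out a shadowed or half-broken idlelib before reinstalling, a quick check along these lines (Python 2.7 module names) can show which files are actually being imported:

# Quick diagnostic (Python 2.7): print which idlelib/CallTips files are
# actually imported, to spot a stale or shadowing copy before reinstalling.
import idlelib
print(idlelib.__file__)               # expected to point into C:\Python27\lib\idlelib

from idlelib import CallTips
print(CallTips.__file__)              # should live next to the file above
print(hasattr(CallTips, 'CallTips'))  # the class IDLE fails to find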
Well, my impatience got the best of me. I copied my site-packages folder to the desktop and uninstalled/reinstalled Python. I pasted site-packages back into my Python directory, and all is right with the world again.