Celery Django Body Encoding - django

Hi, does anyone know how the body of a Celery JSON message is encoded before it is put into the queue (I use Redis as the broker in my case)?
{'body': 'W1sic2hhd25AdWJ4LnBoIiwge31dLCB7fSwgeyJjYWxsYmFja3MiOiBudWxsLCAiZXJyYmFja3MiOiBudWxsLCAiY2hhaW4iOiBudWxsLCAiY2hvcmQiOiBudWxsfV0=',
 'content-encoding': 'utf-8',
 'content-type': 'application/json',
 'headers': {'lang': 'py',
             'task': 'export_users',
             'id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
             'shadow': None,
             'eta': None,
             'expires': None,
             'group': None,
             'retries': 0,
             'timelimit': [None, None],
             'root_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
             'parent_id': None,
             'argsrepr': "('<email@example.com>', {})",
             'kwargsrepr': '{}',
             'origin': 'gen187209@ubuntu'},
 'properties': {'correlation_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
                'reply_to': '403f7314-384a-30a3-a518-65911b7cba5c',
                'delivery_mode': 2,
                'delivery_info': {'exchange': '', 'routing_key': 'celery'},
                'priority': 0,
                'body_encoding': 'base64',
                'delivery_tag': 'dad6b5d3-c667-473e-a62c-0881a7349684'}}
Some background: I have a Node.js project that needs to trigger my Celery tasks (Django). The background tasks all live in the Django app, but the trigger and the task details will come from the Node.js app.
Thanks in advance.

It may just be simpler to use the node-celery client
https://github.com/mher/node-celery/blob/master/celery.js
to invoke a Celery task from Node.js.
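As for the encoding question itself: per the body_encoding property in the dump above, the body is base64-encoded UTF-8 JSON, and decoding it yields Celery's message-protocol payload of (args, kwargs, embedded options). A quick check in Python:

import base64
import json

body = 'W1sic2hhd25AdWJ4LnBoIiwge31dLCB7fSwgeyJjYWxsYmFja3MiOiBudWxsLCAiZXJyYmFja3MiOiBudWxsLCAiY2hhaW4iOiBudWxsLCAiY2hvcmQiOiBudWxsfV0='

# The body is a JSON array of [args, kwargs, embed].
args, kwargs, embed = json.loads(base64.b64decode(body))
print(args)    # ['shawn@ubx.ph', {}]
print(kwargs)  # {}
print(embed)   # {'callbacks': None, 'errbacks': None, 'chain': None, 'chord': None}

So a producer in any language only has to emit that JSON array, base64-encode it, and supply matching headers and properties, which is exactly what the node-celery client does for you.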

Related

Django app with celery, tasks are always "PENDING". How can I fix this bug?

I have the following Celery settings in settings.py:

CELERY_BROKER_URL = "amqp://admin:admin2017@localhost"
CELERY_IMPORTS = ("web.task",)
When I use the form to submit a task to Celery, I see the state is always PENDING.
[Screen cap of pending tasks]
The following code is used in models.py (I also have a tasks.py):
from django.contrib.auth.models import User
from django.db import models

class AnalysisStatus(models.IntegerChoices):
    PENDING = 1
    COMPLETED = 2
    FAILED = 0

class Analysis(models.Model):
    STATUS_CHOICES = ((1, "PENDING"), (2, "COMPLETED"), (0, "FAILED"))
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    status = models.IntegerField(choices=AnalysisStatus.choices, null=True)
    created_at = models.DateTimeField(auto_now_add=True, null=True)
    file = models.FileField(null=True)
    data = models.JSONField(null=True)
I'm very new to Celery and Django, so any help is greatly appreciated.
Edit: I installed RabbitMQ locally, set the virtual host permissions, and started a worker. I now see this error:
The full contents of the message headers:
{'lang': 'py', 'task': 'web.task.switch', 'id': '250f7475-5186-4f68-a8ac-cb19802221cd', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'group_index': None, 'retries': 0, 'timelimit': [None, None], 'root_id': '250f7475-5186-4f68-a8ac-cb19802221cd', 'parent_id': None, 'argsrepr': "('admin-1652754818.sol', '0.4.24', 24)", 'kwargsrepr': '{}', 'origin': 'gen19316@MacBook-Air.hitronhub.home', 'ignore_result': False}
The delivery info for this task is:
{'consumer_tag': 'None4', 'delivery_tag': 1, 'redelivered': False, 'exchange': '', 'routing_key': 'celery'}
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.9/site-packages/celery/worker/consumer/consumer.py", line 591, in on_task_received
    strategy = strategies[type_]
KeyError: 'web.task.switch'
Error: unregistered task.
Edit: if anyone can help with this, I would greatly appreciate it; I've tried everything over the last 72 hours and am getting desperate for any information that might help or point me in the right direction.
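For context on that traceback: the KeyError means the worker's task registry has no task named web.task.switch, i.e. the worker process never imported and registered the module that defines it. A minimal sketch of a declaration whose registered name lines up with what the caller sends (assuming the task lives in web/task.py; the parameter names are illustrative, inferred from the argsrepr above):

from celery import shared_task

@shared_task(name='web.task.switch')  # must match the name the producer sends
def switch(source_file, compiler_version, run_id):  # illustrative parameter names
    ...

With CELERY_IMPORTS = ("web.task",), the worker should import that module at startup; if the registered name still differs from what the client sends (for example because of a relative import or a different module path), the strategies lookup fails exactly like this.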

How to resolve the error "The specified alg value is not allowed" in Django

I am trying to decode a JWT string that I received from an authentication service, but I'm getting the error "The specified alg value is not allowed". What could be the issue? I verified that the algorithm I should use is HS256.
When I try to decode the JWT string at https://jwt.io/ it decodes fine.
Code snippet:
import jwt

try:
    print(jwt_value)
    decoded = jwt.decode(jwt_value, 'secret', algorithms=['HS256'])
    print(decoded)
except Exception as e:
    print(e)
JWT Settings
import datetime

JWT_AUTH = {
    # 'JWT_EXPIRATION_DELTA': datetime.timedelta(seconds=36000),
    'JWT_ENCODE_HANDLER': 'rest_framework_jwt.utils.jwt_encode_handler',
    'JWT_DECODE_HANDLER': 'rest_framework_jwt.utils.jwt_decode_handler',
    'JWT_PAYLOAD_HANDLER': 'sbp.custom_jwt.jwt_payload_handler',
    'JWT_PAYLOAD_GET_USER_ID_HANDLER': 'rest_framework_jwt.utils.jwt_get_user_id_from_payload_handler',
    'JWT_RESPONSE_PAYLOAD_HANDLER': 'sbp.custom_jwt.jwt_response_payload_handler',
    'JWT_SECRET_KEY': 'secret',
    'JWT_GET_USER_SECRET_KEY': None,
    'JWT_PUBLIC_KEY': None,
    'JWT_PRIVATE_KEY': None,
    'JWT_ALGORITHM': 'HS256',
    'JWT_VERIFY': True,
    'JWT_VERIFY_EXPIRATION': True,
    'JWT_LEEWAY': 0,
    'JWT_EXPIRATION_DELTA': datetime.timedelta(seconds=36000),
    'JWT_AUDIENCE': None,
    'JWT_ISSUER': None,
    'JWT_ALLOW_REFRESH': False,
    'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=7),
    'JWT_AUTH_HEADER_PREFIX': 'JWT',
    'JWT_AUTH_COOKIE': None,
}
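If it helps, PyJWT normally raises this error when the alg in the token's header is not in the algorithms allowlist passed to jwt.decode. A small sketch to check what the token actually claims (jwt_value as in the snippet above):

import jwt

# Read the token's header without verifying the signature.
header = jwt.get_unverified_header(jwt_value)
print(header)  # e.g. {'typ': 'JWT', 'alg': 'HS512'}

# If header['alg'] is anything other than 'HS256', then passing
# algorithms=['HS256'] will raise "The specified alg value is not allowed".

Note that jwt.io decodes the payload regardless of the algorithm, which is why it can display the token while jwt.decode refuses it.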

models_committed not firing a function on model commit - Flask-SQLAlchemy

I'm trying to get a signal, models_committed, to fire a function when my models are committed. Currently it just does a standard print(), but I can't get the function to fire. I tried both the decorator method and the models_committed.connect(func, app) method.
What I'm expecting to happen
I commit some data to my database (into a model), then signal_thing() (located in __init__.py) prints 'hello - is this working' to the flask run console.
What is actually happening
Data is committed to the database (it shows up in my web app), but nothing is printed to the console; it seems signal_thing() does not fire.
I can't find much information about how to get signals working properly with Flask.
__init__.py
from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy, models_committed, before_models_committed

app = Flask(__name__)  # app must exist before the signals are connected

def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello - is this working')

models_committed.connect(signal_thing, app)
before_models_committed.connect(signal_thing, app)
Decorator method:

@models_committed.connect(app)
def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello this worked')
config
SQLALCHEMY_TRACK_MODIFICATIONS is set to True.
<Config {'ENV': 'production', 'DEBUG': False, 'TESTING': False, 'PROPAGATE_EXCEPTIONS': None, 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'SECRET_KEY': 'shh', 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'USE_X_SENDFILE': False, 'SERVER_NAME': None, 'APPLICATION_ROOT': '/', 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_SECURE': False, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_REFRESH_EACH_REQUEST': True, 'MAX_CONTENT_LENGTH': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'PREFERRED_URL_SCHEME': 'http', 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, 'MAX_COOKIE_SIZE': 4093, 'SQLALCHEMY_DATABASE_URI': 'sqlite:///C:\\Users\\\\ZigBot\\app.db', 'SQLALCHEMY_TRACK_MODIFICATIONS': True}>
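For what it's worth, here is a minimal sketch of a wiring that fires on commit, assuming Flask-SQLAlchemy with blinker installed and SQLALCHEMY_TRACK_MODIFICATIONS left at True; connect_via is blinker's decorator form, filtered to this app as the sender:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy, models_committed

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
db = SQLAlchemy(app)

@models_committed.connect_via(app)
def on_models_committed(sender, changes, **kwargs):
    # changes is a list of (model instance, operation) pairs, where the
    # operation is 'insert', 'update', or 'delete'.
    for obj, op in changes:
        print(f'{op}: {obj}')

Note that sender here is the Flask app, not an object with a print method, so the sender.print(...) call in the original snippet would raise an AttributeError even once the signal fires.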

"container started" event of a pod from kubernetes using pythons kubernetes library

I have a deployment with one container that has a postStart hook, as shown below:
containers:
  - name: openvas
    image: my-image:test
    lifecycle:
      postStart:
        exec:
          command:
            - /usr/local/tools/is_service_ready.sh
I'm watching the events for pods using Python's kubernetes library.
When the pod gets deployed, the container comes up and the postStart script runs until it exits successfully. I want to get an event from Kubernetes, via the Python kubernetes library, when the CONTAINER comes up.
I tried watching the events, but I only get an event with the 'ContainersReady' status once postStart completes and the POD comes up, as can be seen below:
'status': {'conditions': [{'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'Initialized'},
                          {'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 26, 51, tzinfo=tzlocal()),
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'Ready'},
                          {'last_probe_time': None,
                           'last_transition_time': None,
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'ContainersReady'},
                          {'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'PodScheduled'}],
           'container_statuses': [{'container_id': 'docker://1c39e13dc777a34c38d4194edc23c3668697223746b60276acffe3d62f9f0c44',
                                   'image': 'my-image:test',
                                   'image_id': 'docker://sha256:9903437699d871c1f3af7958a7294fe419ed7b1076cdb8e839687e67501b301b',
                                   'last_state': {'running': None,
                                                  'terminated': None,
                                                  'waiting': None},
                                   'name': 'samplename',
                                   'ready': True,
                                   'restart_count': 0,
                                   'state': {'running': {'started_at': datetime.datetime(2019, 4, 18, 16, 25, 14, tzinfo=tzlocal())},
                                             'terminated': None,
                                             'waiting': None}}],
Before this, I get the 'PodScheduled' condition as 'True':
'status': {'conditions': [{'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'Initialized'},
                          {'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
                           'message': 'containers with unready status: [openvas]',
                           'reason': 'ContainersNotReady',
                           'status': 'False',
                           'type': 'Ready'},
                          {'last_probe_time': None,
                           'last_transition_time': None,
                           'message': 'containers with unready status: [openvas]',
                           'reason': 'ContainersNotReady',
                           'status': 'False',
                           'type': 'ContainersReady'},
                          {'last_probe_time': None,
                           'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
                           'message': None,
                           'reason': None,
                           'status': 'True',
                           'type': 'PodScheduled'}],
           'container_statuses': [{'container_id': None,
                                   'image': 'ns-openvas:test',
                                   'image_id': '',
                                   'last_state': {'running': None,
                                                  'terminated': None,
                                                  'waiting': None},
                                   'name': 'openvas',
                                   'ready': False,
                                   'restart_count': 0,
                                   'state': {'running': None,
                                             'terminated': None,
                                             'waiting': {'message': None,
                                                         'reason': 'ContainerCreating'}}}],
Is there anything I can try to get the event when the CONTAINER comes up?
Obviously, with the current approach you will never get it working because, as described here:
The postStart handler runs asynchronously relative to the Container’s code, but Kubernetes’ management of the container blocks until the postStart handler completes. The Container’s status is not set to RUNNING until the postStart handler completes.
Maybe you should create another pod with the is_service_ready.sh script, which watches the events of the main pod.
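For reference, a minimal sketch of the watch loop itself (assuming the official kubernetes Python client, the default namespace, and a local kubeconfig); it fires as soon as the API reports the container's state as running, which, per the quote above, happens only after postStart completes:

from kubernetes import client, config, watch

config.load_kube_config()  # use config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace='default'):
    pod = event['object']
    for cs in pod.status.container_statuses or []:
        if cs.state and cs.state.running:
            # Reported only once the kubelet considers the container running,
            # i.e. after any postStart hook has finished.
            print(pod.metadata.name, cs.name, 'started at', cs.state.running.started_at)
            w.stop()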

Django Celery Beat Import/KeyError Issue

I am using Django 1.6 with Celery. I have a task that runs on a schedule. Everything looks correct, but I get this import error when Celery beat runs the task:
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'utc': True, 'chord': None, 'args': [], 'retries': 0, 'expires': None, 'task': 'bot_data.tasks.get_unanswered_threads', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '6143f259-721b-4984-99ae-790c00633271'} (233b)
Traceback (most recent call last):
  File "/home/one/.virtualenvs/bot/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received
    strategies[name](message, body,
KeyError: 'bot_data.tasks.get_unanswered_threads'
base.py:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'get-unanswered-threads--every-15-seconds': {
        'task': 'bot_data.tasks.get_unanswered_threads',
        'schedule': timedelta(seconds=15),
        'args': (),
    },
}

CELERY_TIMEZONE = 'UTC'
From bot_data.tasks, get_unanswered_threads:
@task()
def get_unanswered_threads():
    slug = 'forums/threads/unanswered.json?PageSize=100'
    thread_batch = []
Never mind, I had bot_data.tasks.get_unanswered_threads incorrect.
The docs specify the decorator as @shared_task instead of @task.
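For reference, a sketch of the corrected task using the @shared_task form, with the body unchanged from the snippet above:

from celery import shared_task

@shared_task
def get_unanswered_threads():
    # Registered as bot_data.tasks.get_unanswered_threads (module path plus
    # function name), matching the name used in CELERYBEAT_SCHEDULE.
    slug = 'forums/threads/unanswered.json?PageSize=100'
    thread_batch = []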