Django Q process does not get cleared from memory - django

I have integrated Django Q in my project and I'm running a task with it. After the task runs successfully, I can still see the process in memory. Is there any way to clear the process from memory once it has finished the job?
Here are my Django Q settings:
Q_CLUSTER = {
    'name': 'my_app',
    'workers': 8,
    'recycle': 500,
    'compress': True,
    'save_limit': 250,
    'queue_limit': 500,
    'cpu_affinity': 1,
    'label': 'Django Q',
    'max_attempts': 1,
    'attempt_count': 1,
    'catch_up': False,
    'redis': {
        'host': '127.0.0.1',
        'port': 6379,
        'db': 0,
    }
}
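For what it's worth, Django Q's worker processes are designed to stay resident between tasks: the cluster keeps the pool alive, and the 'recycle' value above restarts a worker only after it has processed that many tasks (releasing its memory at that point). A minimal sketch of enqueuing a task against this cluster, assuming a hypothetical my_app.tasks.send_report function:
from django_q.tasks import async_task, result

# enqueue the task; one of the cluster's resident workers picks it up
task_id = async_task('my_app.tasks.send_report', 42)

# optionally poll for the return value (wait is given in milliseconds)
report = result(task_id, wait=500)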

Related

Celery Django Body Encoding

Hi, does anyone know how the body of a Celery JSON message is encoded before it is put into the queue cache (I use Redis in my case)?
{'body': 'W1sic2hhd25AdWJ4LnBoIiwge31dLCB7fSwgeyJjYWxsYmFja3MiOiBudWxsLCAiZXJyYmFja3MiOiBudWxsLCAiY2hhaW4iOiBudWxsLCAiY2hvcmQiOiBudWxsfV0=',
 'content-encoding': 'utf-8',
 'content-type': 'application/json',
 'headers': {'lang': 'py',
             'task': 'export_users',
             'id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
             'shadow': None,
             'eta': None,
             'expires': None,
             'group': None,
             'retries': 0,
             'timelimit': [None, None],
             'root_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
             'parent_id': None,
             'argsrepr': "('<email@example.com>', {})",
             'kwargsrepr': '{}',
             'origin': 'gen187209@ubuntu'},
 'properties': {'correlation_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
                'reply_to': '403f7314-384a-30a3-a518-65911b7cba5c',
                'delivery_mode': 2,
                'delivery_info': {'exchange': '', 'routing_key': 'celery'},
                'priority': 0,
                'body_encoding': 'base64',
                'delivery_tag': 'dad6b5d3-c667-473e-a62c-0881a7349684'}}
Just for background: I have a Node.js project which needs to trigger my Celery tasks (Django). The background tasks all live in the Django app, but the trigger and the task details will come from the Node.js app.
Thanks in advance.
It may just be simpler to use the Node.js Celery client
https://github.com/mher/node-celery/blob/master/celery.js
to invoke a Celery task from Node.js.
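As for the encoding itself: with the default JSON serializer the body is the JSON-serialized (args, kwargs, embed) triple of Celery's message protocol, and the 'body_encoding': 'base64' property means that JSON string is base64 encoded before it is stored in Redis. A minimal sketch of decoding the body shown above:
import base64
import json

body = 'W1sic2hhd25AdWJ4LnBoIiwge31dLCB7fSwgeyJjYWxsYmFja3MiOiBudWxsLCAiZXJyYmFja3MiOiBudWxsLCAiY2hhaW4iOiBudWxsLCAiY2hvcmQiOiBudWxsfV0='

# base64 -> JSON -> [args, kwargs, embed]
args, kwargs, embed = json.loads(base64.b64decode(body))
print(args)    # ['shawn@ubx.ph', {}]
print(kwargs)  # {}
print(embed)   # {'callbacks': None, 'errbacks': None, 'chain': None, 'chord': None}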

aws lambda deployed by zappa is not able to connect to remote database

I'm deploying a Django project to AWS Lambda using Zappa, with MongoDB Atlas as my database.
I'm trying to connect to the database using djongo.
I set django_settings in zappa_settings.json to my project's Django settings.
The connection to the database with these settings works just fine on localhost. When deploying, it fails to connect to the server, and I suspect it falls back to a default local database (the db passed to mongo_client.py isn't valid or something, so it connects to the default HOST).
The actual error I get is:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
djongo.sql2mongo.SQLDecodeError: FAILED SQL: SELECT
If anyone has an idea, I would love to hear it.
I'm attaching the settings with some fields redacted (they are set in my actual settings).
Django settings (database part):
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'db',
        'HOST': 'mongodb://<username>:<password>@<>
        'USER': 'username',
        'PASSWORD': 'password',
    }
}
zappa_settings:
{
    "dev":
    {
        "aws_region": "eu-west-1",
        "django_settings": settings,
        "profile_name": "default",
        "project_name": name,
        "runtime": "python3.6",
        "s3_bucket": bucket,
        "timeout_seconds": 900,
        "manage_roles": false,
        "role_name": name,
        "role_arn": arn,
        "slim_handler": true
    }
}
Try this:
'default': {
    'ENGINE': 'djongo',
    'CLIENT': {
        'host': 'mongodb+srv://url',
        'username': '<username>',
        'password': '<password>',
        'name': '<db_name>'
    }
}
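For reference, djongo's documented layout puts the database name in NAME at the top level and the connection details under CLIENT. A fuller sketch along those lines, with placeholder Atlas values (note that mongodb+srv URIs generally need the dnspython package available in the Lambda deployment package):
# settings.py -- sketch only; host and credentials below are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'db',
        'CLIENT': {
            # hypothetical Atlas SRV string; replace with your cluster's URI
            'host': 'mongodb+srv://cluster0.example.mongodb.net/db?retryWrites=true',
            'username': '<username>',
            'password': '<password>',
        }
    }
}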

"container started" event of a pod from kubernetes using pythons kubernetes library

I have a deployment with one container that has a postStart hook, as shown below:
containers:
  - name: openvas
    image: my-image:test
    lifecycle:
      postStart:
        exec:
          command:
            - /usr/local/tools/is_service_ready.sh
I'm watching pod events using Python's kubernetes library.
When the pod gets deployed, the container comes up and the postStart script runs until it exits successfully. I want to get an event from Kubernetes, via the Python kubernetes library, at the moment the CONTAINER comes up.
I tried watching the events, but I only get an event with the 'ContainersReady' condition after postStart completes and the pod comes up, as can be seen below:
'status': {
    'conditions': [
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'Initialized'},
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 26, 51, tzinfo=tzlocal()),
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'Ready'},
        {'last_probe_time': None,
         'last_transition_time': None,
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'ContainersReady'},
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'PodScheduled'}],
    'container_statuses': [
        {'container_id': 'docker://1c39e13dc777a34c38d4194edc23c3668697223746b60276acffe3d62f9f0c44',
         'image': 'my-image:test',
         'image_id': 'docker://sha256:9903437699d871c1f3af7958a7294fe419ed7b1076cdb8e839687e67501b301b',
         'last_state': {'running': None, 'terminated': None, 'waiting': None},
         'name': 'samplename',
         'ready': True,
         'restart_count': 0,
         'state': {'running': {'started_at': datetime.datetime(2019, 4, 18, 16, 25, 14, tzinfo=tzlocal())},
                   'terminated': None,
                   'waiting': None}}],
and before this I get a status where 'PodScheduled' is 'True' but the container is still in 'ContainerCreating':
'status': {
    'conditions': [
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'Initialized'},
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
         'message': 'containers with unready status: [openvas]',
         'reason': 'ContainersNotReady',
         'status': 'False',
         'type': 'Ready'},
        {'last_probe_time': None,
         'last_transition_time': None,
         'message': 'containers with unready status: [openvas]',
         'reason': 'ContainersNotReady',
         'status': 'False',
         'type': 'ContainersReady'},
        {'last_probe_time': None,
         'last_transition_time': datetime.datetime(2019, 4, 18, 16, 25, 3, tzinfo=tzlocal()),
         'message': None,
         'reason': None,
         'status': 'True',
         'type': 'PodScheduled'}],
    'container_statuses': [
        {'container_id': None,
         'image': 'ns-openvas:test',
         'image_id': '',
         'last_state': {'running': None, 'terminated': None, 'waiting': None},
         'name': 'openvas',
         'ready': False,
         'restart_count': 0,
         'state': {'running': None,
                   'terminated': None,
                   'waiting': {'message': None, 'reason': 'ContainerCreating'}}}],
Is there anything I can try in order to get an event when the CONTAINER comes up?
Obviously, with the current approach you will never get it working because, as described here:
The postStart handler runs asynchronously relative to the Container’s
code, but Kubernetes’ management of the container blocks until the
postStart handler completes. The Container’s status is not set to
RUNNING until the postStart handler completes.
Maybe you should create another pod running the is_service_ready.sh script, which watches the events of the main pod.
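If it helps, here is a minimal sketch of such a watcher using the Python kubernetes client; it streams pod events and reports when a container's state becomes running (the namespace, label selector, and container name are placeholders for illustration):
from kubernetes import client, config, watch

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

w = watch.Watch()
# namespace, label selector and container name are assumptions for this sketch
for event in w.stream(v1.list_namespaced_pod, namespace='default',
                      label_selector='app=openvas'):
    pod = event['object']
    for cs in (pod.status.container_statuses or []):
        # per the docs quoted above, state.running only appears once the
        # postStart handler has completed
        if cs.name == 'openvas' and cs.state.running is not None:
            print(f'{cs.name} running since {cs.state.running.started_at}')
            w.stop()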

How to connect Django Rest-api with MongoDB?

I'm trying to connect a Django REST API to a Mongo database which I created on mlab.com. Below is the code I define in the settings.py file of my Django REST API.
MONGODB_DATABASES = {
    'default': {
        'NAME': 'dummy',
        'HOST': os.environ.get('MONGO_HOST',
            'mongodb://dummyuser:dummypassword@ds125851.mlab.com:25851/dummy'),
    }
}
mongoengine.connection(
    db='dummy',
    host=os.environ.get('MONGO_HOST',
        'mongodb://dummyuser:dummypassword@ds125851.mlab.com:25851/dummy'),
)
When I run this API I get this error:
host=os.environ.get('MONGO_HOST', 'mongodb://dummyuser:dummypassword@ds125851.mlab.com:25851/dummy'),
TypeError: 'module' object is not callable
I searched for solutions online, but the examples I found were for older versions. I'm using Django REST framework 2.0.7, MongoDB 3.4 and mongoengine 0.15, and I couldn't find an answer for these versions. I also tried connecting this API to a local database and got the same error. How can I solve it?
I have successfully connected a Django REST API with MongoDB. Here is the solution that works for me.
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'dummy',
        'HOST': 'localhost',
    }
}
MONGODB_DATABASES = {
    'db': 'dummy',
    'host': 'localhost',
    'port': 27017,
}
Here is the link for more information.
http://blog.tomjohnhall.com/python-3-6-django-2-0-and-mongodb-3-4-3-6/
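Since djongo translates Django's SQL into MongoDB queries, ordinary Django models (and DRF serializers) can be used unchanged on top of the configuration above. A minimal sketch, with model and field names made up for illustration:
# models.py -- a plain Django model; djongo maps it to a MongoDB collection
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    created = models.DateTimeField(auto_now_add=True)

# usage, e.g. in a view or the Django shell
Article.objects.create(title='hello mongo')
latest = Article.objects.order_by('-created').first()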
You can try these steps to connect your Django 2.0 (or later) project to a MongoDB database:
1) Install django-mongoengine for Django 2.0:
pip install -e git+https://github.com/MongoEngine/django-mongoengine.git#egg=django-mongoengine
2) Add these to your settings file:
from mongoengine import *
'django_mongoengine',  # add this line to INSTALLED_APPS
MONGODB_DATABASES = {
    "default": {
        "name": '<db_name>',
        "host": 'localhost',
        "password": '',
        "username": '',
        "tz_aware": True,  # if you are using timezones in Django (USE_TZ = True)
    },
}
You can find the details for querying the database here
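As a quick illustration of what querying looks like with django-mongoengine/mongoengine (the class and field names here are made up):
from mongoengine import Document, StringField, IntField

class Customer(Document):
    name = StringField(required=True, max_length=100)
    age = IntField()

# documents are saved to and queried from MongoDB directly
Customer(name='Alice', age=30).save()
adults = Customer.objects(age__gte=18).order_by('name')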

django unit testing on multiple databases

I'm working on a Django project where all my unit test cases were working perfectly.
As soon as I introduced a second database, all my test cases that inherit from TestCase broke. At this stage I haven't built any test cases for that second database, but my router is working fine.
When I run the tests I get the error:
"KeyError: 'SUPPORTS_TRANSACTIONS'"
It appears to me that it is trying to check that all the databases I have set up support transactions, but the second database is never created.
Any ideas on how to get the test runner to build the second database?
I realise this is quite an old thread, but I ran into the same issue, and my fix was adding the multi_db = True flag to my test case, e.g.:
class TestThingWithMultipleDatabases(TestCase):
    multi_db = True

    def test_thing(self):
        pass
Source: https://github.com/django/django/blob/master/django/test/testcases.py#L861
This causes Django to call flush on all databases (or roll back if they support transactions).
I too am using a DB router.
I'm afraid I can't find this in Django's documentation, so no link for that.
Yes, I had a similar problem... my fix was to set 'SUPPORTS_TRANSACTIONS': True for each of the database connections in the settings file. Not sure if this is the correct way to fix it, but it worked for me.
'SUPPORTS_TRANSACTIONS': True worked for me too.
However, I have a somewhat unusual multiple-DB setup using database routers.
@user298404: what does your multiple-DB setup look like?
PS: sorry, not enough points for a comment...
Here is a multiple db setup that I currently have in production:
DATABASES = {
    # 'default' is used as the WRITE (master) connection
    DB_PRIMARY_MASTER: {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'main',
        'USER': 'main_write',
        'PASSWORD': 'XXXX',
        'HOST': 'db-master',
        'PORT': '3306',
        'SUPPORTS_TRANSACTIONS': True,
    },
    # Slave connections are READONLY
    DB_PRIMARY_SLAVE: {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'main',
        'USER': 'main_read',
        'PASSWORD': 'XXXX',
        'HOST': 'db-slave',
        'PORT': '3306',
        'TEST_MIRROR': DB_PRIMARY_MASTER,
        'SUPPORTS_TRANSACTIONS': True,
    },
    # 'mail_default' is used as the WRITE (master) connection for the mail database
    DB_MAIL_MASTER: {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbmail',
        'USER': 'dbmail_write',
        'PASSWORD': 'XXXX',
        'HOST': 'db-mail-master',
        'PORT': '3306',
        'SUPPORTS_TRANSACTIONS': True,
    },
    # Slave connections are READONLY
    DB_MAIL_SLAVE: {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbmail',
        'USER': 'dbmail_read',
        'PASSWORD': 'XXXX',
        'HOST': 'db-mail-slave',
        'PORT': '3306',
        'TEST_MIRROR': DB_MAIL_MASTER,
        'SUPPORTS_TRANSACTIONS': True,
    },
}
DB_PRIMARY_MASTER, DB_PRIMARY_SLAVE, DB_MAIL_MASTER, and DB_MAIL_SLAVE are all string constants so that they can be used in my database router.
Hint: DB_PRIMARY_MASTER='default'
I hope this helps!
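Since those constants are consumed by a database router, here is a minimal sketch of what such a router might look like (the routing rules and the slave alias value are made up for illustration; a real router would reflect the master/slave split above):
# routers.py -- hypothetical router using the connection-name constants above
DB_PRIMARY_MASTER = 'default'
DB_PRIMARY_SLAVE = 'primary_slave'  # placeholder alias

class PrimaryRouter:
    def db_for_read(self, model, **hints):
        # send reads to the read-only slave
        return DB_PRIMARY_SLAVE

    def db_for_write(self, model, **hints):
        # send writes to the master
        return DB_PRIMARY_MASTER

    def allow_relation(self, obj1, obj2, **hints):
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # only create tables on the master connection
        return db == DB_PRIMARY_MASTER

# settings.py
# DATABASE_ROUTERS = ['myproject.routers.PrimaryRouter']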
Referring to this link:
Django documentation on multiple databases (Multi-DB)
you can do:
from django.test import TransactionTestCase

class TestMyViews(TransactionTestCase):
    databases = {'default', 'other'}  # {'__all__'} should work too

    def test_index_page_view(self):
        call_some_test_code()
Thanks to @sih4sing5hog5.