I am using djongo as my backend database connector engine. The decision to use Django with MongoDB, as well as the choice of Djongo, predates my tenure with the team.
I am trying to improve the efficiency of a search result, for which I want to know the exact "find" query being run on MongoDB, but I can't seem to find a way to do it. Is there a way I can see the exact query that djongo is running under the hood?
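Since djongo ultimately talks to MongoDB through pymongo, one general-purpose way to see the exact commands hitting the server is pymongo's command monitoring API. This is a minimal sketch, assuming the registration can run before Django opens its MongoDB connection (the logging setup here is illustrative, not djongo-specific):

    import logging

    from pymongo import monitoring

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger(__name__)

    class CommandLogger(monitoring.CommandListener):
        """Log every command (find, count, aggregate, ...) sent to MongoDB."""

        def started(self, event):
            log.info("command %s on db %s: %r",
                     event.command_name, event.database_name, event.command)

        def succeeded(self, event):
            pass  # event.duration_micros could be logged here

        def failed(self, event):
            log.warning("command %s failed: %r", event.command_name, event.failure)

    # Global listeners must be registered before the MongoClient is created,
    # i.e. before Django opens its database connection.
    monitoring.register(CommandLogger())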
Follow-up: I cannot share the exact search query here, but it looks something like this:
queryset = (
    ModelName.objects.filter(
        Q(attr1__icontains=search_term)
        | Q(foreign_key1__attr__icontains=search_term)
    )
    .distinct()
    .order_by("-id")
    .select_related('foreign_key1__attr')
    .values('attr1', 'foreign_key1__attr')
)
I am a bit confused: does a foreign key even make sense if my backend is MongoDB? Is this shoddy DB design, or does djongo implement some foreign key constraints at the middleware layer?
Related
I'm trying to integrate this raw query into a Django ORM query, but I'm having trouble combining the raw query and the ORM query.
The original query, which works fine with Postgres query tools:
"SELECT SUM(counter),type, manufacturer
FROM sells GROUP BY manufacturer, type"
Now I tried to integrate this into a django-orm query like this:
res_postgres = Sells.objects.all().values('manufacturer','type','counter').aggregate(cnter=Sum('counter'))
But what I get is just the counter cnter ...
What I need is the result from the raw query, which looks like this
What I also tried is using values with field names, like Sells.objects.values('manufacturer'....).aggregate(cnter=Sum('counter')).
But then Django builds a query that adds GROUP BY id, which is not what I need. I need an aggregation over the entire data set, not at the object level, while keeping the information from the other fields.
When I use Cars.objects.raw() it asks me about primary keys, which I also don't need.
Any hints here? Is that possible with the Django ORM at all?
Use annotate(...) instead of aggregate()
res_postgres = Sells.objects.values('manufacturer','type').annotate(cnter=Sum('counter'))
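One caveat: if the model defines a default ordering in Meta.ordering, Django adds those fields to the GROUP BY clause, which can break the grouping. Calling order_by() with no arguments clears it. A minimal sketch, assuming the Sells model from the question:

    from django.db.models import Sum

    # An empty order_by() clears any Meta.ordering, so only 'manufacturer'
    # and 'type' end up in the GROUP BY clause.
    res_postgres = (
        Sells.objects
        .values('manufacturer', 'type')
        .annotate(cnter=Sum('counter'))
        .order_by()
    )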
I have a Django and Django REST Framework powered RESTful API (talking to a PostgreSQL DB backend) which supports filtering on a specific model.
Now I want to add full-text search functionality.
Is it possible to use Elasticsearch for the full-text search and then apply my existing API filters on top of those search results?
I would suggest you consider using only PostgreSQL to do what you ask.
In my opinion it is the best solution, because you will have the data and the search indexes directly inside PostgreSQL, and you will not be forced to install and maintain additional software (such as Elasticsearch) or keep the data and indexes in sync.
This is the simplest example of a full-text search in Django with PostgreSQL:
Entry.objects.filter(body_text__search='Cheese')
For the basics of using full-text search in Django with PostgreSQL, see the official documentation: "Full text search"
If you want to deepen further you can read an article that I wrote on the subject:
"Full-Text Search in Django with PostgreSQL"
Your question is too broad to be answered with code, but it's definitely possible.
You can easily search your Elasticsearch index for rows matching your full-text criteria.
Then take those rows' PK fields (or any other candidate key used to uniquely identify rows in your PostgreSQL DB) and filter your Django ORM-backed models for PKs matching the ones returned by Elasticsearch.
Pseudocode would be:
from elasticsearch import Elasticsearch

def get_chunk(l, length):
    # Yield successive `length`-sized slices of `l`.
    for i in range(0, len(l), length):
        yield l[i:i + length]

es = Elasticsearch()
res = es.search(index="index", body={"query": {"match": ...}})

# Collect the PKs. The exact response shape depends on your mapping:
# hits live under res['hits']['hits'], with the indexed fields of each
# hit under hit['_source'].
pks = [hit['_source']['pk'] for hit in res['hits']['hits']]

for chunk_10k in get_chunk(pks, 10000):
    DjangoModel.objects.filter(pk__in=chunk_10k, **the_rest_of_your_api_filters)
EDIT
To handle the case in which your Elasticsearch query matches lots and lots of PKs, you can define a generator that yields successive 10K chunks of the results, so you won't exceed your DB's query limits and query performance stays reasonable. I've defined it above as the function get_chunk.
The same approach would work with alternatives like Redis, MongoDB, etc.
I'm going to add a procedure to a Django app where I need to store data, but only for a few hours; I also don't want to add another table to my DB schema (which is already fairly big). I'm thinking of using Redis for the task. In the end, what I want to achieve is a Transfer model that always uses another database for its CRUD operations.
Example:
Transfer.objects.all() # Always be the same as Transfer.objects.using('redis').all()
OtherModel.objects.all() # Always use default DB
# Same for save
transfer_instance.save() # Always translate to transfer_instance.save(using='redis')
other_instance.save() # Work as usual using the default DB
How can I achieve this? I don't mind using obscure trickery as long as it works.
Thanks!
You will need to use a Database Router to achieve what you need.
Here is the official documentation for Using Database Routers
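As a hedged sketch of what such a router could look like for the Transfer model from the question (the module path and the 'redis' database alias are assumptions):

    class TransferRouter:
        """Route all operations on the Transfer model to the 'redis' alias.

        Assumes a 'redis' entry exists in settings.DATABASES and that the
        model is named Transfer; both are assumptions for this sketch.
        """

        def _is_transfer(self, model):
            return model.__name__ == 'Transfer'

        def db_for_read(self, model, **hints):
            return 'redis' if self._is_transfer(model) else None

        def db_for_write(self, model, **hints):
            return 'redis' if self._is_transfer(model) else None

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            if model_name == 'transfer':
                return db == 'redis'
            return None  # no opinion for other models

    # settings.py (path is illustrative):
    # DATABASE_ROUTERS = ['myapp.routers.TransferRouter']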
I am currently developing a server using Flask/SQLAlchemy. When an ORM model is not yet present as a table in the database, SQLAlchemy creates it by default.
However, when an ORM class is changed, for instance when an extra column is added, the change does not get applied to the database. So the extra column will be missing every time I query, and I have to adjust my DB manually after every change to the models I use.
Is there a better way to apply changes to the models during development? I hardly think manual MySQL manipulation is the best solution.
You can proceed as follows:
from sqlalchemy import Column, String
from migrate import *  # sqlalchemy-migrate's changeset extensions add Column.create()
new_column = Column('new_column', String, default='some_default_value')
new_column.create(my_table, populate_default=True)
You can find more details about SQLAlchemy migration in the sqlalchemy-migrate changeset documentation: https://sqlalchemy-migrate.readthedocs.org/en/latest/changeset.html
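For a more self-contained picture, here is a minimal sketch of the same sqlalchemy-migrate approach. The connection URL and table name are made up, and this targets the older SQLAlchemy API that sqlalchemy-migrate supports:

    from sqlalchemy import Column, MetaData, String, Table, create_engine
    from migrate import *  # enables Column.create() via sqlalchemy-migrate

    engine = create_engine('mysql://user:password@localhost/mydb')  # illustrative URL
    metadata = MetaData(bind=engine)

    # Reflect the existing table rather than redefining it.
    my_table = Table('my_table', metadata, autoload=True)

    new_column = Column('new_column', String(50), default='some_default_value')
    new_column.create(my_table, populate_default=True)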
I am using Django (Django-nonrel) and have a models.py that contains all my tables. I would like to create indexes on some of the tables, but I am not able to find documentation that explains how. I would expect to be able to declare the index in models.py, with syncdb then creating it on the database.
Any help is much appreciated!
Thanks!
Django-nonrel doesn't interact with MongoDB on its own; you need the MongoDB Python driver (pymongo) and/or an object-document mapper (ODM) such as mongoengine or django-mongodb-engine to interact with the database. It's the job of the ODM and driver to create the indexes, and the syntax you need depends on what you're using to interact with MongoDB. See the relevant documentation for creating indexes in pymongo, django-mongodb-engine, or mongoengine.
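For concreteness, here is a hedged sketch of index creation at the pymongo level (database, collection, and field names are made up):

    import pymongo

    client = pymongo.MongoClient()           # default localhost connection
    collection = client['mydb']['mymodel']   # names are illustrative

    # Single-field ascending index; create_index is a no-op if it exists.
    collection.create_index([('attr1', pymongo.ASCENDING)])

    # Compound index across two fields.
    collection.create_index([
        ('attr1', pymongo.ASCENDING),
        ('created_at', pymongo.DESCENDING),
    ])

With mongoengine, by contrast, indexes are declared on the document class and created when the collection is first used (again, the model and field names are illustrative):

    from mongoengine import Document, StringField

    class MyModel(Document):
        attr1 = StringField()

        meta = {'indexes': ['attr1']}  # ensured on first use of the collection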