I have a table in the DB named Plan.
See the code in models.py:
class Plan(models.Model):
    id = models.AutoField(primary_key=True)
    Comments = models.CharField(max_length=255)

    def __str__(self):
        return self.Comments
I want to fetch the data (comments) from the DB and then delete it, so that each row is fetched only once, and show the fetched data in a Django template.
Here is what I tried in views.py:
def Data(request):
    data = Plan.objects.filter(id=6)
    # latest_id = Model.objects.all().values_list('id', flat=True).order_by('-id').first()
    # Plan.objects.all()[:1].delete()
    context = {'data': data}
    dataD = Plan.objects.filter(id=6)
    dataD.delete()
    return render(request, 'data.html', context)
This code deletes the data from the DB, but the data is not shown in the template.
How can I do this?
Your template must be updated, because it fetches the data from the DB only once; if the DB is updated afterwards, your template won't change.
From the Django docs:
Pickling QuerySets
If you pickle a QuerySet, this will force all the results to be loaded into memory prior to pickling. Pickling is usually used as a precursor to caching and when the cached queryset is reloaded, you want the results to already be present and ready for use (reading from the database can take some time, defeating the purpose of caching). This means that when you unpickle a QuerySet, it contains the results at the moment it was pickled, rather than the results that are currently in the database.
If you only want to pickle the necessary information to recreate the QuerySet from the database at a later time, pickle the query attribute of the QuerySet. You can then recreate the original QuerySet (without any results loaded) using some code like this:
>>> import pickle
>>> query = pickle.loads(s) # Assuming 's' is the pickled string.
>>> qs = MyModel.objects.all()
>>> qs.query = query # Restore the original 'query'.
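Applying that idea to the question, here is a minimal sketch of the view (my sketch, assuming the id=6 filter from the question): force the queryset to be evaluated before deleting, so the results are already in memory when the template renders.

from django.shortcuts import render

def Data(request):
    # Evaluate the queryset now so the rows are loaded into memory...
    data = list(Plan.objects.filter(id=6))
    # ...then delete; the template still renders the in-memory copies.
    Plan.objects.filter(id=6).delete()
    return render(request, 'data.html', {'data': data})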
I'm building a Django application, and in it I would like to track whenever a particular model was last accessed.
I'm opting for this in order to build a user activity history.
I know Django provides auto_now and auto_now_add, but these do not do what I want them to do. The latter tracks when a model was created, and the former tracks when it was last modified, which is different from when it was last accessed, mind you.
I've tried adding another datetime field to my model's specification:
accessed_on = models.DateTimeField()
Then I try to update the model's access manually by calling the following after each access:
model.accessed_on = datetime.utcnow()
model.save()
But it still won't work.
I've gone through the django documentation for an answer, but couldn't find one.
Help would be much appreciated.
What about adding a field that stores the last save date, and saving the object every time it is translated from the DB representation to the Python representation?

class YourModel(models.Model):
    date_accessed = models.DateTimeField(auto_now=True)

    @classmethod
    def from_db(cls, db, field_names, values):
        obj = super().from_db(db, field_names, values)
        obj.save()
        return obj
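One thing to keep in mind (my addition, not part of the answer above): saving on every load writes every column back. If that is a concern, you can limit the write to the timestamp; an auto_now field is still refreshed as long as it is named in update_fields:

from django.db import models

class YourModel(models.Model):
    date_accessed = models.DateTimeField(auto_now=True)

    @classmethod
    def from_db(cls, db, field_names, values):
        obj = super().from_db(db, field_names, values)
        # Only the timestamp column is written; auto_now still refreshes it
        # because the field is listed in update_fields.
        obj.save(update_fields=["date_accessed"])
        return obj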
In Django, I have a User model and a Following model:
class User(models.Model):
    uid = models.UUIDField(primary_key=True)

class Following(models.Model):
    follower_uid = models.ForeignKey(USER_MODEL, related_name="followers", on_delete=models.CASCADE)
    followed_uid = models.ForeignKey(USER_MODEL, related_name="following", on_delete=models.CASCADE)
with corresponding database tables for both object types.
When Django loads a User object, I want to also load the number of followers of that user in the same database query, i.e. using a join. What I don't want to do is load the user first and then do a second query to get the number of followers.
Is this possible using the Django object model or do I have to write raw sql?
I also want to load a second-degree count; that is, the number of followers of the followers of the user. As before, I want this count to be loaded in the same database query as the user itself.
I'd appreciate specific syntax and examples; I have read a ton of Django documentation and nothing seems to answer this. Thanks!
You can do this query:
from django.db.models import Count
>>> user = User.objects.filter(pk=some_id).annotate(num_followers=Count('followers'))
>>> user
[<User: someuser>]
>>> user[0].id
some_id
>>> user[0].num_followers
123
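A small follow-up sketch (assuming the models above): if you want a single User instance rather than a one-element queryset, the same annotation works with .get():

from django.db.models import Count

user = User.objects.annotate(num_followers=Count('followers')).get(pk=some_id)
user.num_followers  # same count as in the example above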
I have a model (lets call it Entity) that has an attribute (Attribute) that changes over time, but I want to keep a history of how that attribute changes in the database. I need to be able to filter my Entities by the current value of Attribute in its manager. But because Django (as far as I can tell) won't let me do this in one query natively, I have created a database view that produces the latest value of Attribute for every Entity. So my model structure looks something like this:
class Entity(models.Model):
    def set_attribute(self, value):
        self.attribute_history.create(value=value)

    def is_attribute_positive(self, value):
        return self.attribute.value > 0


class AttributeEntry(models.Model):
    entity = models.ForeignKey(Entity, related_name='attribute_history')
    value = models.IntegerField()
    time = models.DateTimeField(auto_now_add=True)


class AttributeView(models.Model):
    id = models.IntegerField(primary_key=True, db_column='id')
    entity = models.OneToOneField(Entity, related_name='attribute')
    value = models.IntegerField()
    time = models.DateTimeField()

    class Meta:
        managed = False
My database has the view that produces the current attribute, created with SQL like this:
CREATE VIEW myapp_attributeview AS
SELECT h1.*
FROM myapp_attributehistory h1
LEFT OUTER JOIN myapp_attributehistory h2
ON h1.entity_id = h2.entity_id
AND (h1.time < h2.time
OR h1.time = h2.time
AND h1.id < h2.id)
WHERE h2.id IS NULL;
My problem is that if I set the attribute on a model object using set_attribute(), checking it with is_attribute_positive() doesn't always work, because Django may be caching the related AttributeView object. How can I make Django update its model, at the very least by requerying the view? Can I mark the attribute property as dirty somehow?
PS: the whole reason I'm doing this is so I can do things like Entity.objects.filter(attribute__value__exact=...).filter(...), so if someone knows an easier way to get that functionality, such an answer will be accepted, too!
I understand that the attribute value is modified by another process (maybe not even Django) accessing the same database. If this is not the case you should take a look at django-reversion.
On the other hand, if that is the case, you should take a look at the second answer of this question. It says that committing a transaction invalidates the query cache, and it offers this snippet:
>>> from django.db import transaction
>>> transaction.enter_transaction_management()
>>> transaction.commit() # Whenever you want to see new data
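Note that this transaction API was removed in later Django versions. As an assumption about what you're after, simply re-reading the entity gives you an instance with no cached relation, so the view is queried again:

entity = Entity.objects.get(pk=entity.pk)  # fresh instance, nothing cached
entity.attribute.value                     # hits myapp_attributeview again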
I never directly solved the problem, but I was able to sidestep it by changing is_attribute_positive() to directly query the database table, instead of the view.
def is_attribute_positive(self, value):
    return self.attribute_history.latest().value > 0
So while the view gives me the flexibility of being able to filter queries on Entity, it seems the best thing to do once the object is received is to operate directly on the table-backed model.
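One detail worth noting (an assumption, since the question doesn't show the Meta): .latest() with no arguments relies on get_latest_by being set on the history model:

class AttributeEntry(models.Model):
    entity = models.ForeignKey(Entity, related_name='attribute_history')
    value = models.IntegerField()
    time = models.DateTimeField(auto_now_add=True)

    class Meta:
        get_latest_by = 'time'  # lets attribute_history.latest() be called without arguments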
I have two models. I want to copy a model object from one model to another:
Model2 is a copy of Model1 (these models have quite a few m2m fields).
Model1:

class Profile(models.Model):
    user = models.OneToOneField(User)
    car = models.ManyToManyField(Car)
    job = models.ManyToManyField(Job)
    .
    .
This is a survey application. I want to save the user's profile when he/she takes the survey (because the profile can be edited after the survey).
I have created another model to save the user's profile when he takes the survey (I'm not sure it's the right way):
class SurveyProfile(models.Model):
    user = models.OneToOneField(SurveyUser)  # this is another model that takes survey users
    car = models.ManyToManyField(Car)
    job = models.ManyToManyField(Job)
How can I copy a user profile from Profile to SurveyProfile?
Thanks in advance.
deepcopy etc won't work because the classes/Models are different.
If you're certain that SurveyProfile has all of the fields present in Profile*, this should work (I've not tested it):

for field in instance_of_model_a._meta.fields:
    if field.primary_key:
        continue  # don't want to clone the PK
    setattr(instance_of_model_b, field.name, getattr(instance_of_model_a, field.name))

instance_of_model_b.save()
* (in which case, I suggest you make an abstract ProfileBase class and inherit that as a concrete class for Profile and SurveyProfile, but that doesn't affect what I've put above)
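Note that _meta.fields does not cover the ManyToMany fields the question mentions; those live on _meta.many_to_many and can only be copied once the target has been saved. A hedged sketch continuing the loop above:

# After instance_of_model_b.save(), copy the M2M fields as well.
for m2m_field in instance_of_model_a._meta.many_to_many:
    if hasattr(instance_of_model_b, m2m_field.name):
        getattr(instance_of_model_b, m2m_field.name).set(
            getattr(instance_of_model_a, m2m_field.name).all()
        )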
I'm having a tough time understanding what you wrote above, so I'm not 100% certain this will work, but if I'm understanding you right, I think I would do something like this:

class Model2Form(ModelForm):
    class Meta:
        model = models.Model2

and then

f = Model2Form(m1.__dict__)  # pass the instance's field values as the form's data
if f.is_valid():
    f.save()
But I think this looks more like poor database design than anything; without seeing the entire Model1 I can't be certain. In any event, I'm not sure why you want to do that anyway, when you can simply use inheritance at the model level, or something else, to get the same behavior.
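One caveat I'd add (my note, not part of the answer above): on Django 1.8 and later a ModelForm must declare fields or exclude explicitly, so the Meta would need something like:

from django.forms import ModelForm

class Model2Form(ModelForm):
    class Meta:
        model = models.Model2
        fields = '__all__'  # include every editable field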
Here's the function I've been using; it builds on model_to_dict. model_to_dict returns the ids of foreign keys rather than their instances, so for those I replace the id with the model instance itself.
from django.db import models
from django.forms.models import model_to_dict


def update_model(src, dest):
    """
    Update one model with the content of another.
    When it comes to Foreign Keys, they need to be
    encoded using models and not the IDs as
    returned from model_to_dict.
    :param src: Source model instance.
    :param dest: Destination model instance.
    """
    src_dict = model_to_dict(src, exclude=["id"])
    for k, v in src_dict.iteritems():  # Python 2; use .items() on Python 3
        if isinstance(v, long):  # FK values come back as plain ids
            m = getattr(src, k, None)
            if isinstance(m, models.Model):
                setattr(dest, k, m)
                continue
        setattr(dest, k, v)
This is how I do it (note: this is in Python3, you might need to change things - get rid of the dictionary comprehension - if you are using python 2):
from django.db import models
from django.forms.models import model_to_dict


def copy_instance_kwargs(src, exclude_pk=True, excludes=[]):
    """
    Generate a copy of a model using model_to_dict, then make sure
    that all the FK references are actually proper FK instances.
    Basically, we return a set of kwargs that may be used to create
    a new instance of the same model - or copy from one model
    to another.
    The resulting dictionary may be used to create a new instance, like so:

        src_dict = copy_instance_kwargs(my_instance)
        ModelClass(**src_dict).save()

    :param src: Instance to copy.
    :param exclude_pk: Exclude the PK of the model, to ensure new records are copies.
    :param excludes: A list of fields to exclude (must be a mutable iterable) from the copy (date_modified, for example).
    """
    # Exclude the PK of the model, since we probably want to make a copy.
    if exclude_pk:
        excludes.append(src._meta.pk.attname)
    src_dict = model_to_dict(src, exclude=excludes)
    # Replace FK ids with the actual model instances.
    fks = {k: getattr(src, k) for k in src_dict.keys()
           if isinstance(getattr(src, k, None), models.Model)}
    src_dict.update(fks)
    return src_dict
I came across something similar, but I also needed to check whether the ForeignKey fields point to compatible models. I ended up with the following method:

def copy_object(obj, model):
    kwargs = dict()
    for field in model._meta.fields:
        if hasattr(obj, field.name) and not field.primary_key:
            if field.remote_field is not None:
                # Skip relational fields whose related model differs between the two models.
                obj_field = obj._meta.get_field(field.name)
                if obj_field.remote_field is None or obj_field.remote_field.model != field.remote_field.model:
                    continue
            kwargs[field.name] = getattr(obj, field.name)
    return model(**kwargs)
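A hypothetical usage with the question's models (survey_user is an assumed SurveyUser instance; fields skipped by the compatibility check, like user here, have to be set by hand):

new_profile = copy_object(profile, SurveyProfile)
new_profile.user = survey_user  # skipped by the check: Profile.user and SurveyProfile.user point at different models
new_profile.save()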
So, if I'm interpreting your problem correctly, you have an old model (Profile), and you're trying to replace it with the new model SurveyProfile. Given the circumstances, you may want to consider using a database migration tool like South in the long run. For now, you can run a script in the Django shell (manage.py shell):
from yourappname.models import *

for profile in Profile.objects.all():
    survey_profile = SurveyProfile()
    # Assuming SurveyUser has user = ForeignKey(User)...
    survey_profile.user = SurveyUser.objects.get(user=profile.user)
    survey_profile.save()  # save first: M2M fields need a primary key before they can be set
    survey_profile.car.set(profile.car.all())  # direct assignment only works on Django < 2.0
    survey_profile.job.set(profile.job.all())
Using South
If this project needs to be maintained and updated in the long term, I would highly recommend using a database migration package like South, which will let you modify fields on a Model, and migrate your database painlessly.
For example, you suggest that your original model had too many ManyToManyFields present. With South, you:
Delete the fields from the model.
Auto-generate a schema migration.
Apply the migration.
This allows you to reuse all of your old code without changing your model names or mucking with the database.
class dbview(models.Model):
    # field definitions omitted for brevity
    class Meta:
        db_table = 'read_only_view'


def main(request):
    result = dbview.objects.all()
Caught an exception while rendering: (1054, "Unknown column 'read_only_view.id' in 'field list'")
There is no primary key I can see in the view. Is there a workaround?
Comment:
I have no control over the view I am accessing with Django. MySQL browser shows columns there but no primary key.
When you say 'I have no control over the view I am accessing with Django. MySQL browser shows columns there but no primary key.'
I assume you mean that this is a legacy table and you are not allowed to add or change columns?
If so, and there really isn't a primary key (even a string or non-int column), then the table hasn't been set up very well and performance might well stink.
It doesn't matter to you, though. All you need is a column that is guaranteed to be unique for every row. Set primary_key=True on that column in your model and Django will be happy.
There is one other possibility that would be problematic. If there is no column that is guaranteed to be unique, then the table might be using composite primary keys. That is, it is specifying that two columns taken together provide a unique primary key. This is perfectly valid relational modelling, but unfortunately unsupported by Django. In that case you can't do much besides raw SQL unless you can get another column added.
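A sketch of what that looks like, assuming the view has some column (here called code, a made-up name) that is unique for every row:

from django.db import models

class ReadOnlyView(models.Model):
    code = models.CharField(max_length=32, primary_key=True)  # any column that is unique per row
    # ...the rest of the view's columns...

    class Meta:
        managed = False
        db_table = 'read_only_view'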
I have this issue all the time. I have a view that I can't or don't want to change, but I want to have a page to display composite information (maybe in the admin section). I just override the save and raise a NotImplementedError:
def save(self, **kwargs):
    raise NotImplementedError()
(although this is probably not needed in most cases, but it makes me feel a bit better)
I also set managed to False in the Meta class.
class Meta:
    managed = False
Then I just pick any field and tag it as the primary key. It doesn't matter if it's really unique when you are just doing filters for displaying information on a page, etc.
Seems to work fine for me. Please comment if there are any problems with this technique that I'm overlooking.
If there really is no primary key in the view, then there is no workaround.
Django requires each model to have exactly one field primary_key=True.
There should have been an auto-generated id field when you ran syncdb (if there is no primary key defined in your model, then Django will insert an AutoField for you).
This error means that Django is asking your database for the id field, but none exists. Can you run python manage.py dbshell and then DESCRIBE read_only_view; and post the result? This will show all of the columns that are in the database.
Alternatively, can you include the model definition you excluded? (and confirm that you haven't altered the model definition since you ran syncdb?)
I know this post is over a decade old, but I ran into this recently and came to SO looking for a good answer. I had to come up with a solution that addresses the OP's original question and, additionally, allows us to add new objects to the model for unit testing purposes, which is a problem I still had with all of the provided solutions.
main.py
from django.db import models


def in_unit_test_mode():
    """some code to detect if you're running unit tests with a temp SQLite DB, like..."""
    import sys
    return "test" in sys.argv

"""You wouldn't want to actually implement it with the import inside here. We have a setting in our django.conf.settings that tests to see if we're running unit tests when the project starts."""


class AbstractReadOnlyModel(models.Model):
    class Meta(object):
        abstract = True
        managed = in_unit_test_mode()

    """This is just to help you fail fast in case a new developer, or future you, doesn't realize this is a database view and not an actual table and tries to update it."""
    def save(self, *args, **kwargs):
        if not in_unit_test_mode():
            raise NotImplementedError(
                "This is a read only model. We shouldn't be writing "
                "to the {0} table.".format(self.__class__.__name__)
            )
        else:
            super(AbstractReadOnlyModel, self).save(*args, **kwargs)


class DbViewBaseModel(AbstractReadOnlyModel):
    not_actually_unique_field = models.IntegerField(primary_key=True)
    # the rest of your field definitions

    class Meta(AbstractReadOnlyModel.Meta):
        abstract = True  # the concrete model is one of the DbView variants below
        db_table = 'read_only_view'


if in_unit_test_mode():
    class DbView(DbViewBaseModel):
        not_actually_unique_field = models.IntegerField()
        """This line removes the primary key property from 'not_actually_unique_field' when running unit tests, so Django will create an AutoField named 'id' on the table it creates in the temp DB that it creates for running unit tests."""
else:
    class DbView(DbViewBaseModel):
        pass


class MainClass(object):
    @staticmethod
    def main_method(request):
        return DbView.objects.all()
test.py
from django.test import TestCase

from main import DbView
from main import MainClass


class TestMain(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.object_in_view = DbView.objects.create(
            # Enter fields here to create test data you expect to be returned from your method.
        )

    def testMain(self):
        objects_from_view = MainClass.main_method(None)  # the request argument isn't used
        returned_ids = [obj.id for obj in objects_from_view]
        self.assertIn(self.object_in_view.id, returned_ids)