I'm currently using Django to build a webhook application for Google Dialogflow.
It was working fine and I was basically done.
For some reason, I have now started running into various seemingly random problems, one of the worst being the following:
Whenever the webhook executes the account creation call, it creates a duplicate database entry, which crashes the program (because my .get() suddenly returns multiple objects instead of a single user).
I'm using the following simple models:
# model to create user entries
class TestUser(models.Model):
    name = models.CharField(max_length=200)
    userID = models.CharField(max_length=12, blank=True)
    registrationDate = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name

# model to add watched movies to user
class Movie(models.Model):
    username = models.ForeignKey(TestUser, on_delete=models.CASCADE)
    title = models.CharField(max_length=200, blank=True)
    genreID = models.IntegerField(blank=True)

    def __str__(self):
        return self.title

    def list_genre_id(self):
        return self.genreID
The part of my webhook that gets executed when the problem occurs should be the following:
if action == "account_creation":
    selection = req.get("queryResult").get("parameters").get("account_selection")
    if selection == "True":
        q = TestUser(name=f"{username}", userID=idgen())
        q.save()
        userID = TestUser.objects.get(name=f"{username}").userID
        fullfillmenttext = {"fulfillment_text": "Alright, I created a new account for you! Would you like to add "
                                                "some of your favorite movies to your account?",
                            "outputContexts": [
                                {
                                    "name": f"projects/nextflix-d48b9/agent/sessions/{userID}/contexts/create_add_movies",
                                    "lifespanCount": 1,
                                    "parameters": {
                                        "data": "{}"
                                    }
                                }]}
This is the simple idgen function I'm using:
def idgen():
    y = ''.join(
        random.SystemRandom().choice(string.ascii_uppercase + string.ascii_lowercase + string.digits)
        for _ in range(12))
    return y
I'm trying to create this userID as a way of having a unique session ID in the webhook calls for all users. Something seems to mess it up, but I don't have the slightest clue what.
Thank you so much for looking over this!
It seems I was able to fix the issue:
The problem was apparently that I had the lifespan of a previous outputContext set to 2 instead of 1, which for some reason resulted in the webhook executing that code block twice. Man, Dialogflow is such a terrible program.
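Independently of the context lifespan, the webhook can be made robust against a duplicated call. A minimal sketch, assuming the TestUser model above and treating name as the lookup key (that key choice is my assumption): get_or_create() reuses the existing row instead of inserting a second one, so .get() can no longer return multiple objects.

if action == "account_creation":
    selection = req.get("queryResult").get("parameters").get("account_selection")
    if selection == "True":
        # get_or_create() returns the existing user if one with this name already
        # exists; idgen() is only called when a new row is actually created.
        user, created = TestUser.objects.get_or_create(
            name=f"{username}",
            defaults={"userID": idgen()},
        )
        userID = user.userID

Declaring name (or userID) as unique=True on the model would additionally let the database itself reject duplicates.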
I want to implement something like Facebook showing ads while you are scrolling through posts: after every 4th or 5th post you see an ad. How can I do that? If any video link is available, please share it.
So, between two posts, something like this.
I am showing all the posts with a for loop. If nested for loops can help, please share how I can loop over two different models in a single loop, showing one item of one model after every 5th item of the other model.
I am not using Google AdSense; you can imagine it as my own model, maybe the same Post model with a field like is_ads=True.
This is my Post model:
class Post(models.Model):
    postuuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, null=True)
    title = models.CharField(max_length=150, blank=False)
    text = models.TextField(null=True, blank=False)
    image = models.ImageField(upload_to='post_images/', null=True, blank=True, default="")
    created_at = models.DateTimeField(auto_now_add=True, null=True)
    likes = models.ManyToManyField(User, blank=True, related_name="post_like")
    tag = models.CharField(max_length=150, blank=True)
    post_url = models.URLField(max_length=150, blank=True)
    video = models.FileField(upload_to='post_videos/', null=True, blank=True, default="")
    # community = models.ForeignKey(communities, on_delete=models.CASCADE)

    def __str__(self):
        return self.title
Looking at this, I think your best option would be to create a view function that creates two querysets and merges them together into a single one that you can pass to the context. It would be something like this:
# settings.py
AD_POST_FREQUENCY = 5  # Use this to control how often ad posts appear in the list of posts.

# views.py
from django.conf import settings
from django.shortcuts import render

from .models import Post

def post_list(request):
    ad_posts = list(Post.objects.filter(is_ad=True))
    non_ad_posts = Post.objects.filter(is_ad=False)
    posts = []
    ad_index = 0
    for i, post in enumerate(non_ad_posts):
        posts.append(post)
        # After every AD_POST_FREQUENCY regular posts, insert the next ad (if any are left).
        if (i + 1) % settings.AD_POST_FREQUENCY == 0 and ad_index < len(ad_posts):
            posts.append(ad_posts[ad_index])
            ad_index += 1
    context = {
        'posts': posts,
    }
    return render(request, 'your_template.html', context=context)
Using this approach, you would be able to essentially create a single list of posts to return to your template. Then you would only need to loop through the list in the template to display all the posts.
I haven't tried this myself, but hopefully it helps you.
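Another option, equally untested on my side, is to cycle through the ads so the pattern keeps repeating once the ad posts run out rather than stopping. The interleave_ads helper below is hypothetical and assumes the same is_ad field and AD_POST_FREQUENCY setting as above:

from itertools import cycle

from django.conf import settings

def interleave_ads(non_ad_posts, ad_posts, frequency=None):
    # Returns one list with an ad inserted after every `frequency` regular posts,
    # cycling over the ads so they repeat once exhausted.
    frequency = frequency or settings.AD_POST_FREQUENCY
    ads = cycle(ad_posts) if ad_posts else None
    result = []
    for i, post in enumerate(non_ad_posts, start=1):
        result.append(post)
        if ads and i % frequency == 0:
            result.append(next(ads))
    return result

In the view you would then call posts = interleave_ads(non_ad_posts, list(ad_posts)) and pass posts to the context as before.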
I have two models that I'm relating using Django's OneToOneField, following this documentation: https://docs.djangoproject.com/en/2.0/topics/db/examples/one_to_one/
class Seats(models.Model):
    north = models.OneToOneField('User', on_delete=models.CASCADE, related_name='north', default=None, null=True)
    bridgetable = models.OneToOneField('BridgeTable', on_delete=models.CASCADE, default=None, null=True)

class BridgeTableManager(models.Manager):
    def create_deal(self):
        deal = construct_deal()
        table = self.create(deal=deal)
        s = Seats(bridgetable=table)
        s.save()
        return table

class BridgeTable(models.Model):
    deal = DealField(default=None, null=True)
When I run this code, I can successfully get the relationship working:
table = BridgeTable.objects.get(pk='1')
user = User.objects.get(username=username)
table.seats.north = user
table.seats.north.save()
print(table.seats.north)
The print statement prints out the name of the player sitting north. But if I try to access the table again like this:
table = BridgeTable.objects.get(pk='1')
print(table.seats.north)
I get "None" instead of the user's name. Is there something I'm missing, like a save that I missed or some concept I'm not understanding? Thanks.
You should save the Seats model object, that is, call table.seats.save(), and then try printing table.seats.north.
table.seats.north.save(), on the other hand, runs save on the User object.
Here are the correct steps:
table = BridgeTable.objects.get(pk='1')
user = User.objects.get(username=username)
table.seats.north = user
table.seats.save()
print(table.seats.north)
I have built a local-network website using the Django framework and recently ran into problems I was not having before.
We are running experiments on a local network, collecting various measurements, and I set up this website to make sure we gather all the data in one place.
I set up a PostgreSQL database and use Django to populate it on the fly as I receive measurements. The script that does that looks like this:
**ladrLogger.py**
# various imports
import django
from django.db import IntegrityError

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()

from logger.models import Measurement, Device, Type, Room, Experiment, ExperimentData

def logDevice(self, port):
    # Callback function executed each time I receive data, to log it in the database
    deviceData = port.data  # get the data
    # Do a bunch of tests and checks
    # ....
    # Get all measurements to add to the database
    # measurements is a list of Measurement objects as defined in my Django models
    measurements = self.prepareMeasurement(...)
    self.saveMeasurements(measurements)
    print "Saved measurements successfully."

def saveMeasurements(self, meas):
    if not meas:
        return
    elif type(meas) is list:
        for m in meas:
            self.saveMeasurements(m)
    elif type(meas) is Measurement:
        try:
            meas.save()
        except IntegrityError as e:
            if 'unique constraint' in e.message:
                print "Skipping... Measurement already existed for device " + meas.device.name
            else:
                print "Skipping measurement due to error: " + e.message

def prepareMeasurement(self, nameDevice, typeDevice, time, data):
    ### Takes the characteristics of a measurement (device, name and type) and creates the appropriate measurements.
    measurements = []
    m = Measurement()
    m.device = Device.objects.get(name=nameDevice)
    m.date = time
    # Bunch of tests
    # .....
    for idv, v in enumerate(value):
        if v in data:
            m = Measurement()
            m.device = something
            m.date = something else
            m.value = bla
            m.quantity = blabla
            measurements.append(m)
    return measurements

# Bunch of other methods
Note that this script runs continuously, waiting for more measurements and executing the logDevice callback when they arrive.
EDIT: A custom library based on YARP takes care of the callback handling. The code that creates the callbacks looks like this:
portid = self.createPort(quer.group(1), True, True)  # creates a port
pyarp.connect(desc[0], self.fullPortPath(portid))  # establishes a connection to the talking port
self.listenToPort(portid, lambda port: self.logDevice(port))  # tells it to execute that callback when it receives messages
Callbacks are entirely dealt with in the background.
On the other hand, I have my Django website with various views displaying devices, measurements, plots and whatnot.
The problem I have is that I am logging my measurements (a few (2-3) per second at busy times, usually less), and the logging itself seems to be fine. But when I call my views, for example asking for the latest measurement of device x, I get an old measurement. One example of the code:
def latestTemp(request, device_id):
    # Creates a csv file with the latest temperature measured
    #### for now does not check what measurements are actually available
    dev = get_object_or_404(Device, pk=device_id)
    tz = pytz.timezone('Europe/Zurich')
    # Create the HttpResponse object with the appropriate CSV header.
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename="%s.csv"' % dev.name
    # get measurement
    lastMeas = Measurement.objects.filter(device=dev, quantity=Type.objects.get(quantity='Temperature')).latest('date')
    writer = csv.writer(response)
    # Form list of required timesteps
    date = lastMeas.date.astimezone(tz)
    writer.writerow([date.strftime('%Y-%m-%d'), date.strftime('%H:%M'), lastMeas.value])
    return response
EDIT (more details):
I have been logging data for a few hours, but the website only shows me something dating back a few hours. As I keep asking for that data, it gets more and more recent, as if it had been buffered somewhere and were now slowly falling into place and becoming visible to the website, until everything finally comes back to normal. On the other hand, if I kill the logging process, the data seems lost forever. What is strange, though, is that the logDevice method completes and I can see that the meas.save() calls were executed. I also tried adding a listener for the Django post_save signal, and I catch the signals correctly.
A few pieces of information:
- I am using the PostgreSQL backend.
- I am running all of this on a dedicated Mac machine.
- Let me know what else would be useful to know.
My questions are:
- Do you see any reason why that might happen? (It used to not happen, so I guess it might have to do with the database getting big; it is about 4 GB right now.)
- As a side question, but maybe related: I suspect the way I am pushing new elements into the database is not very clean, since the code runs completely independently of the Django website itself. Any suggestions on how to improve this? I thought the ladrLogger code could send a request to a dedicated view that creates the new element, but that might be heavier for no purpose (a rough sketch of that idea is below).
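For what it is worth, here is a minimal sketch of that last idea: the logger process posts each measurement over HTTP to a dedicated view, so all database writes happen inside the Django site. Every name here (the /logger/measurements/ URL, pushMeasurement, add_measurement) is made up for illustration, and parsing/validation is omitted:

# ladrLogger side: push the measurement to the website instead of saving it directly
import requests

def pushMeasurement(payload):
    # payload is a dict like {"device": ..., "date": ..., "value": ..., "quantity": ...}
    requests.post("http://localhost:8000/logger/measurements/", data=payload, timeout=5)

# website side (views.py): a dedicated view that owns all database writes
from django.http import HttpResponse, HttpResponseBadRequest
from django.views.decorators.csrf import csrf_exempt

from logger.models import Device, Measurement, Type

@csrf_exempt
def add_measurement(request):
    if request.method != "POST":
        return HttpResponseBadRequest("POST only")
    m = Measurement(
        device=Device.objects.get(name=request.POST["device"]),
        date=request.POST["date"],
        value=request.POST["value"],
        quantity=Type.objects.get(quantity=request.POST["quantity"]),
    )
    m.save()
    return HttpResponse("ok")

Whether this is actually lighter than writing to the database directly is debatable, but it does keep a single process responsible for all writes.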
EDIT: adding my models.py
class Room(models.Model):
    fullName = models.CharField(max_length=20, unique=True)
    shortName = models.CharField(max_length=5, unique=True, default="000")
    nickName = models.CharField(max_length=20, default="Random Room")

    def __unicode__(self):
        return self.fullName

class Type(models.Model):
    quantity = models.CharField(max_length=100, default="Temperature", unique=True)
    unit = models.CharField(max_length=5, default="C", blank=True)
    VALUE_TYPES = (
        ('float', 'float'),
        ('boolean', 'boolean'),
        ('integer', 'integer'),
        ('string', 'string'),
    )
    value_type = models.CharField(max_length=20, choices=VALUE_TYPES, default="float")

    def __unicode__(self):
        return self.quantity

class Device(models.Model):
    name = models.CharField(max_length=30, default="Unidentified Device", unique=True)
    room = models.ForeignKey(Room)
    description = models.CharField(max_length=500, default="", blank=True)
    indigoId = models.CharField(max_length=30, default="000")

    def __unicode__(self):
        #r = Room.objects.get(pk=self.room)
        return self.name  #+ ' in room ' + r.name

    def latestMeasurement(self, *args):
        if len(args) == 0:
            # No argument, so just return the latest measurement
            meas = Measurement.objects.filter(device=self).latest('date')
        else:
            # Use the first argument as the type
            meas = Measurement.objects.filter(device=self, quantity=args[0]).latest('date')
        if not meas:
            return None
        else:
            return meas

    def typeList(self):
        return Type.objects.filter(measurement__device=self).distinct()

class Measurement(models.Model):
    device = models.ForeignKey(Device)
    date = models.DateTimeField(db_index=True)
    value = models.CharField(max_length=100, default="")
    quantity = models.ForeignKey(Type)

    class Meta:
        unique_together = ('date', 'device', 'quantity',)
        index_together = ['date', 'device']

    def __unicode__(self):
        t = self.quantity
        return str(self.value) + " " + self.quantity.unit
        # return str(self.value)
I am running into a database issue in my unit tests. I think it has something to do with the way I am using TestCase and setUpData.
When I try to set up my test data with certain values, the tests throw the following error:
django.db.utils.IntegrityError: duplicate key value violates unique constraint
...
psycopg2.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_productgroup_product_name_48ec6f8d_uniq"
DETAIL: Key (product_name)=(Almonds) already exists.
I changed all of my primary keys and it seems to be running fine. It doesn't seem to affect any of the tests.
However, I'm concerned that I am doing something wrong. When it first happened, I reversed about an hour's worth of work on my app (not that much code for a noob), which corrected the problem.
Then when I wrote the changes back in, the same issue presented itself again. TestCase is pasted below. The issue seems to occur after I add the sortrecord items, but corresponds with the items above it.
I don't want to keep going through and changing primary keys and urls in my tests, so if anyone sees something wrong with the way I am using this, please help me out. Thanks!
TestCase
class DetailsPageTest(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.product1 = ProductGroup.objects.create(
            product_name="Almonds"
        )
        cls.variety1 = Variety.objects.create(
            product_group=cls.product1,
            variety_name="non pareil",
            husked=False,
            finished=False,
        )
        cls.supplier1 = Supplier.objects.create(
            company_name="Acme",
            company_location="Acme Acres",
            contact_info="Call me!"
        )
        cls.shipment1 = Purchase.objects.create(
            tag=9,
            shipment_id=9999,
            supplier_id=cls.supplier1,
            purchase_date='2015-01-09',
            purchase_price=9.99,
            product_name=cls.variety1,
            pieces=99,
            kgs=999,
            crackout_estimate=99.9
        )
        cls.shipment2 = Purchase.objects.create(
            tag=8,
            shipment_id=8888,
            supplier_id=cls.supplier1,
            purchase_date='2015-01-08',
            purchase_price=8.88,
            product_name=cls.variety1,
            pieces=88,
            kgs=888,
            crackout_estimate=88.8
        )
        cls.shipment3 = Purchase.objects.create(
            tag=7,
            shipment_id=7777,
            supplier_id=cls.supplier1,
            purchase_date='2014-01-07',
            purchase_price=7.77,
            product_name=cls.variety1,
            pieces=77,
            kgs=777,
            crackout_estimate=77.7
        )
        cls.sortrecord1 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date="2015-02-05",
            bags_sorted=20,
            turnout=199,
        )
        cls.sortrecord2 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date="2015-02-07",
            bags_sorted=40,
            turnout=399,
        )
        cls.sortrecord3 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date='2015-02-09',
            bags_sorted=30,
            turnout=299,
        )
Models
from datetime import datetime

from django.db import models
from django.db.models import Q

class ProductGroup(models.Model):
    product_name = models.CharField(max_length=140, primary_key=True)

    def __str__(self):
        return self.product_name

    class Meta:
        verbose_name = "Product"

class Supplier(models.Model):
    company_name = models.CharField(max_length=45)
    company_location = models.CharField(max_length=45)
    contact_info = models.CharField(max_length=256)

    class Meta:
        ordering = ["company_name"]

    def __str__(self):
        return self.company_name

class Variety(models.Model):
    product_group = models.ForeignKey(ProductGroup)
    variety_name = models.CharField(max_length=140)
    husked = models.BooleanField()
    finished = models.BooleanField()
    description = models.CharField(max_length=500, blank=True)

    class Meta:
        ordering = ["product_group_id"]
        verbose_name_plural = "Varieties"

    def __str__(self):
        return self.variety_name

class PurchaseYears(models.Manager):
    def purchase_years_list(self):
        unique_years = Purchase.objects.dates('purchase_date', 'year')
        results_list = []
        for p in unique_years:
            results_list.append(p.year)
        return results_list

class Purchase(models.Model):
    tag = models.IntegerField(primary_key=True)
    product_name = models.ForeignKey(Variety, related_name='purchases')
    shipment_id = models.CharField(max_length=24)
    supplier_id = models.ForeignKey(Supplier)
    purchase_date = models.DateField()
    estimated_delivery = models.DateField(null=True, blank=True)
    purchase_price = models.DecimalField(max_digits=6, decimal_places=3)
    pieces = models.IntegerField()
    kgs = models.IntegerField()
    crackout_estimate = models.DecimalField(max_digits=6, decimal_places=3, null=True)
    crackout_actual = models.DecimalField(max_digits=6, decimal_places=3, null=True)

    objects = models.Manager()
    purchase_years = PurchaseYears()

    # Keep manager as "objects" in case admin, etc. needs it. Filter can be called like so:
    # Purchase.objects.purchase_years_list()
    # Managers in docs: https://docs.djangoproject.com/en/1.8/intro/tutorial01/

    class Meta:
        ordering = ["purchase_date"]

    def __str__(self):
        return self.shipment_id

    def _weight_conversion(self):
        return round(self.kgs * 2.20462)

    lbs = property(_weight_conversion)

class SortingModelsBagsCalulator(models.Manager):
    def total_sorted(self, record_date, current_set):
        sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
                  SortingRecords['date'] <= record_date]
        return sum(sorted)

class SortingRecords(models.Model):
    tag = models.ForeignKey(Purchase, related_name='sorting_record')
    date = models.DateField()
    bags_sorted = models.IntegerField()
    turnout = models.IntegerField()

    objects = models.Manager()

    def __str__(self):
        return "%s [%s]" % (self.date, self.tag.tag)

    class Meta:
        ordering = ["date"]
        verbose_name_plural = "Sorting Records"

    def _calculate_kgs_sorted(self):
        kg_per_bag = self.tag.kgs / self.tag.pieces
        kgs_sorted = kg_per_bag * self.bags_sorted
        return round(kgs_sorted, 2)

    kgs_sorted = property(_calculate_kgs_sorted)

    def _byproduct(self):
        waste = self.kgs_sorted - self.turnout
        return round(waste, 2)

    byproduct = property(_byproduct)

    def _bags_remaining(self):
        current_set = SortingRecords.objects.values().filter(~Q(id=self.id), tag=self.tag)
        sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
                  SortingRecords['date'] <= self.date]
        remaining = self.tag.pieces - sum(sorted) - self.bags_sorted
        return remaining

    bags_remaining = property(_bags_remaining)
EDIT
It also fails with integers, like so.
django.db.utils.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_purchase_pkey"
DETAIL: Key (tag)=(9) already exists.
UPDATE
I should have mentioned this earlier, but I completely forgot: I have two unit test files that use the same data. Just for kicks, I changed a primary key so it matched in both instances of setUpTestData() (using a new value), and sure enough, I got the same error.
These two setups were working fine side by side before I added more data to one of them. Now, it appears that they need different values. I guess you can only get away with using repeat data for so long (one way to avoid the hard-coded duplicates is sketched below).
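One way to avoid the hard-coded duplicates would be to generate the colliding values instead of spelling them out in each test file. This is only a sketch of the idea, using a hypothetical make_purchase helper against the Purchase model shown above (the supplier and variety objects still have to be passed in):

import itertools

_tag_counter = itertools.count(1000)

def make_purchase(supplier, variety, **overrides):
    # Hands out a fresh primary key for every Purchase, so two test modules
    # can never reuse the same tag value. Everything else gets a throwaway
    # default unless explicitly overridden.
    fields = {
        "tag": next(_tag_counter),
        "shipment_id": "0000",
        "supplier_id": supplier,
        "purchase_date": "2015-01-01",
        "purchase_price": 1.0,
        "product_name": variety,
        "pieces": 1,
        "kgs": 1,
        "crackout_estimate": 1.0,
    }
    fields.update(overrides)
    return Purchase.objects.create(**fields)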
I continued to get this error without having any duplicate data, but I was able to resolve the issue by initializing the object and calling the save() method rather than creating the object via Model.objects.create().
In other words, I did this:
@classmethod
def setUpTestData(cls):
    cls.person = Person(first_name="Jane", last_name="Doe")
    cls.person.save()
Instead of this:
@classmethod
def setUpTestData(cls):
    cls.person = Person.objects.create(first_name="Jane", last_name="Doe")
I've been running into this issue sporadically for months now. I believe I just figured out the root cause and a couple solutions.
Summary
For whatever reason, it seems like the Django test case base classes aren't removing the database records created by, let's just call it, TestCase1 before running TestCase2. So when TestCase2 tries to create records in the database using the same IDs as TestCase1, the database raises a DuplicateKey exception because those IDs already exist. And even saying the magic word "please" won't help with database duplicate key errors.
Good news is, there are multiple ways to solve this problem! Here are a couple...
Solution 1
Make sure that, if you are overriding the class method tearDownClass, you call super().tearDownClass(). If you override tearDownClass() without calling its super, it will in turn never call TransactionTestCase._post_teardown() nor TransactionTestCase._fixture_teardown(). Quoting from the docstring of TransactionTestCase._post_teardown():
def _post_teardown(self):
    """
    Perform post-test things:
    * Flush the contents of the database to leave a clean slate. If the
      class has an 'available_apps' attribute, don't fire post_migrate.
    * Force-close the connection so the next test gets a clean cursor.
    """
If TestCase.tearDownClass() is not called via super() then the database is not reset in between test cases and you will get the dreaded duplicate key exception.
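As a minimal sketch (the class name and the cleanup steps are just examples), an override that preserves Django's cleanup looks like this:

from django.test import TestCase

class MyTests(TestCase):

    @classmethod
    def tearDownClass(cls):
        # do your own class-level cleanup first (stop patchers, remove temp files, ...)
        ...
        # then let Django flush the test database and close the connection
        super().tearDownClass()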
Solution 2
Subclass TransactionTestCase and set the class variable serialized_rollback = True, like this:
class MyTestCase(TransactionTestCase):
    fixtures = ['test-data.json']
    serialized_rollback = True

    def test_name_goes_here(self):
        pass
Quoting from the source:
class TransactionTestCase(SimpleTestCase):
    ...
    # If transactions aren't available, Django will serialize the database
    # contents into a fixture during setup and flush and reload them
    # during teardown (as flush does not restore data from migrations).
    # This can be slow; this flag allows enabling on a per-case basis.
    serialized_rollback = False
When serialized_rollback is set to True, the Django test runner rolls back any transactions inserted into the database between test cases. And batta bing, batta bang... no more duplicate key errors!
Conclusion
There are probably many more ways to implement a solution for the OP's issue, but these two should work nicely. Would definitely love to have more solutions added by others for clarity's sake and a deeper understanding of the underlying Django test case base classes. Phew, say that last line real fast three times and you could win a pony!
The log you provided states DETAIL: Key (product_name)=(Almonds) already exists. Did you verify this in your db?
To prevent such errors in the future, you could prefix all of your test data strings with test_.
I discovered the issue, as noted at the bottom of the question.
From what I can tell, the database didn't like me using duplicate data in the setUpTestData() methods of two different tests. Changing the primary key values in the second test corrected the problem.
I think the problem here is that you had a tearDownClass method in your TestCase without a call to the super method.
That way the Django TestCase loses the transactional functionality behind setUpTestData, so it doesn't clean your test db after a TestCase is finished.
Check the warning in the Django docs here:
https://docs.djangoproject.com/en/1.10/topics/testing/tools/#django.test.SimpleTestCase.allow_database_queries
I had a similar problem that was caused by explicitly providing a primary key value in a test case.
As discussed in the Django documentation, manually assigning a value to an auto-incrementing field doesn't update the field's sequence, which might later cause a conflict.
I solved it by altering the sequence manually:
from django.db import connection

class MyTestCase(TestCase):

    @classmethod
    def setUpTestData(cls):
        Model.objects.create(id=1)
        with connection.cursor() as c:
            c.execute(
                """
                ALTER SEQUENCE "app_model_id_seq" RESTART WITH 2;
                """
            )
This is a hard question for me to describe, but I will do my best here.
I have a model that is for a calendar event:
class Event(models.Model):
    account = models.ForeignKey(Account, related_name="event_account")
    location = models.ForeignKey(Location, related_name="event_location")
    patient = models.ManyToManyField(Patient)
    datetime_start = models.DateTimeField()
    datetime_end = models.DateTimeField()
    last_update = models.DateTimeField(auto_now=False, auto_now_add=False, null=True, blank=True)
    event_series = models.ForeignKey(EventSeries, related_name="event_series", null=True, blank=True)
    is_original_event = models.BooleanField(default=True)
When this is saved I am overriding the save() method to check and see if the event_series (recurring events) is set. If it is, then I need to iteratively create another event object for each recurring date.
The following seems to work, though it may not be the best approach:
def save(self, *args, **kwargs):
    if self.pk is None:
        if self.event_series is not None and self.is_original_event is True:
            recurrence_rules = EventSeries.objects.get(pk=self.event_series.pk)
            rr_freq = DAILY
            if recurrence_rules.frequency == "DAILY":
                rr_freq = DAILY
            elif recurrence_rules.frequency == "WEEKLY":
                rr_freq = WEEKLY
            elif recurrence_rules.frequency == "MONTHLY":
                rr_freq = MONTHLY
            elif recurrence_rules.frequency == "YEARLY":
                rr_freq = YEARLY
            rlist = list(rrule(rr_freq, count=recurrence_rules.recurrences, dtstart=self.datetime_start))
            for revent in rlist:
                evnt = Event.objects.create(account=self.account, location=self.location, datetime_start=revent,
                                            datetime_end=revent, is_original_event=False, event_series=self.event_series)
                super(Event, evnt).save(*args, **kwargs)
    super(Event, self).save(*args, **kwargs)
However, the real problem I am finding is that with this approach, saving from the admin forms does create the recurring events, but if I try to access self.patient, which is an M2M field, I keep getting this error:
'Event' instance needs to have a primary key value before a many-to-many relationship can be used
My main question is about this m2m error, but also if you have any feedback on the nested saving for recurring events, that would be great as well.
Thanks much!
If the code trying to access self.patient is in the save method and runs before the instance has been saved, then this is clearly the expected behaviour. Remember that Model objects are just a thin (well...) wrapper over a SQL database... Also, even if you first save your new instance and then try to access self.patient from the save method, you'll still have an empty queryset, since the m2m won't have been saved by the admin form yet.
IOW, if you have something to do that depends on the m2m being set, you'll have to put it in a distinct method and ensure that method gets called when appropriate.
About your code snippet:
1/ the recurrence_rules = EventSeries.objects.get(pk=self.event_series.pk) lookup is just redundant, since you already have the very same object under the name self.event_series
2/ there's no need to call save on the events you create with Event.objects.create - the ModelManager.create method really creates an instance (that is: saves it to the database). A simplified version is sketched below.
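Putting those two points together, and keeping the same dateutil.rrule imports and field names as in the question, a trimmed-down sketch of the save override might look like this (a suggestion only, not the one true way):

FREQUENCIES = {"DAILY": DAILY, "WEEKLY": WEEKLY, "MONTHLY": MONTHLY, "YEARLY": YEARLY}

def save(self, *args, **kwargs):
    creating = self.pk is None
    super(Event, self).save(*args, **kwargs)  # save the original event first
    if creating and self.event_series is not None and self.is_original_event:
        rr_freq = FREQUENCIES.get(self.event_series.frequency, DAILY)
        for revent in rrule(rr_freq, count=self.event_series.recurrences, dtstart=self.datetime_start):
            # create() already saves, so no extra save() call is needed
            Event.objects.create(account=self.account, location=self.location,
                                 datetime_start=revent, datetime_end=revent,
                                 is_original_event=False, event_series=self.event_series)

Copying the patient m2m onto the generated events would still have to happen in a separate method called after the admin form has saved its m2m data, as discussed above.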