To simplify my problem, let's say I have a simple User model that happens to have an IntegerField containing a score in a game.
I'd like to store the time and the value of the score every time the score is modified. For example, say user A starts with score=0 at time=0, then wins 5 points at time=3. I'd change the value of the score in the view, but I'd also like to be able to query another attribute that would give me {0: 0, 3: 5}.
The app will need to work with MySQL, PostgreSQL, and probably SQLite.
Use a separate model for the score, with at least value and time fields and a foreign key to your user model. Then access the current score from the user with:
user.score_set.order_by('-time').first()
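A minimal sketch of such a model, assuming the names Score, value, and time (only the fields the answer mentions; everything else is illustrative):

from django.conf import settings
from django.db import models

class Score(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    value = models.IntegerField()
    time = models.DateTimeField(auto_now_add=True)

The history from the question ({0: 0, 3: 5}) then falls out of a single query, e.g. dict(user.score_set.values_list('time', 'value')).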
I want to make a site where people can make listings for things to sell. I want it to have a front page where the most popular (hot) items are always displayed.
Popularity decreases with time and increases with activity (bidding, commenting, clicking). Every item starts with a popularity of 100.
That way uninteresting items disappear quickly and interesting ones stay on longer.
So every time a user interacts with an object, its popularity should increase (for example, every time a unique user makes a GET request for the object's details, its popularity goes up by 1; every bid increases it by 10).
Conversely, every time a minute or so passes, the popularity of all currently active items decreases. Once it hits 0, the item is "deactivated": it will still be tradable, but it will never hit the front page again.
The problem is, how do I decrease the popularity of a queryset of all active items?
I realize that every time the user requests the front page, I could just fetch all active objects, calculate their popularity in Python code and sort them by hand, but that seems rather wasteful.
I know I can easily set a field on an entire queryset using the update function, but that only takes one absolute value for the whole set. Is there a built-in way to just decrease the field by one?
Or do I just have to loop through the queryset and decrease every value manually?
from django.db import models

class Item(models.Model):
    popularity = models.IntegerField()

Item.objects.update(popularity=models.F('popularity') - 1)
This is how you update a queryset based on the values it already has instead of assigning one absolute value; you can tweak it to fit your needs.
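For the periodic decay described above, the same F() trick can run from a scheduled job (cron, Celery beat, etc.); a sketch, assuming an active boolean field that the question implies but never names:

from django.db.models import F

def decay_popularity():
    # One UPDATE statement decrements every active item at once.
    Item.objects.filter(active=True).update(popularity=F('popularity') - 1)
    # Anything that has dropped to zero or below leaves the front page for good.
    Item.objects.filter(active=True, popularity__lte=0).update(active=False)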
I have a customer table in DynamoDB with basic attributes like name, dob, zipcode, email, etc. I want to add another attribute to it which will keep increasing with time. For example, each time the user clicks on a product (item), I want to add that to the record so that I have the full snapshot of the customer's profile in a single value indexed by the customerId. So, my new attribute would be called viewedItems and would be a list of itemIds viewed (along with the timestamp).
However, given DynamoDB's 400KB item size limit, it is going to be exceeded over time as I keep adding clicked products to the customer profile.
How can I best define my objects so as to perform the following?
Access the full profile of the customer by customerId, including the views.
Access a time-filtered profile of the customer (like all interactions within the last N days), in which case the viewed items should be filtered by the given time range.
Scan the entire table with a time filter on viewedItems.
The query needs to be performant as the profile could be pulled at request time.
Ability to update individual customer record (via a batch job, for example, that updates each customer's record if need be).
One way to do this would be to create a different table (say customer_viewed_items) with hash key customerId and a range key timestamp with value being the itemId that the customer viewed. But this looks like an increasingly complicated schema - not to mention twice the cost involved in accessing the item. If I have to create another attribute based on (say) "bought" items, then I'll need to create another table. So, the solution I have in mind does not seem good to me.
Would really appreciate if you could help suggest a better schema/approach.
Since you really don't know how many items will be viewed by a user (edge case: a user opens all items sequentially, multiple times), you cannot store this information in a single DynamoDB record.
The only solution is to normalize your database and create a separate table like the one you've described.
Now, the next question: how do you minimize retrieval cost with such a scheme? Usually you don't need to fetch all viewed items; you probably want to display just some of them, so you only need to fetch the last X.
You can cache those items in the main customer table, i.e. create a field "lastXviewedItems" and keep it updated so that it only ever holds a limited number of items without breaking the size limit. Of course, for BI analysis you will still have to store every view in the second table, as sketched below.
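A rough sketch of that write path with boto3; the table names follow the question, while the cap of 20 cached views, the key names and the helper itself are assumptions:

import time

import boto3

dynamodb = boto3.resource('dynamodb')
customers = dynamodb.Table('customer')
viewed = dynamodb.Table('customer_viewed_items')

def record_view(customer_id, item_id, keep_last=20):
    now = int(time.time())
    # Full history goes to the normalized table (hash key customerId, range key timestamp).
    viewed.put_item(Item={'customerId': customer_id, 'timestamp': now, 'itemId': item_id})
    # The main customer record only caches the most recent views, keeping it small.
    resp = customers.get_item(Key={'customerId': customer_id})
    recent = resp.get('Item', {}).get('lastXviewedItems', [])
    recent = [{'itemId': item_id, 'timestamp': now}] + recent
    customers.update_item(
        Key={'customerId': customer_id},
        UpdateExpression='SET lastXviewedItems = :r',
        ExpressionAttributeValues={':r': recent[:keep_last]},
    )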
I am currently building a Django application where visitors can buy an online course. I now want to implement the possibility to provide discount codes. As these discount codes should be limited by quantity I now have the following implementation idea:
Guest visits www.page.com?discount=TEST
The discount model contains the fields discount_codes and max_qty. I will check here whether the code exists. I also have to count all entries in my order model that used the discount code TEST (my order model contains the foreign key field 'redeemed_discounts').
As soon as the user clicks Pay (via Stripe), I'll once again count all the orders in my order model that contain 'TEST' to make sure max_qty has not been reached in the meantime.
Now I can charge the visitor.
Would you consider this a good implementation, or do you see any problems with the way I am planning to do it?
Instead of using just max_qty, why don't you use something like use_left and max_use?
Whenever someone uses the code, you reduce the count accordingly, and when it hits zero you stop accepting the code. With this approach you don't have to scan the order table every time to see whether the coupon code is still available; a sketch follows below.
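A minimal Django sketch of that counter (the model and field names are illustrative); filtering on use_left__gt=0 inside the UPDATE makes the decrement atomic, which also closes the race between the max_qty check and the Stripe charge that the question worries about:

from django.db import models

class Discount(models.Model):
    code = models.CharField(max_length=32, unique=True)
    max_use = models.IntegerField()
    use_left = models.IntegerField()

def redeem(code):
    # Atomically claim one use; the UPDATE only matches rows with uses remaining.
    claimed = Discount.objects.filter(code=code, use_left__gt=0).update(
        use_left=models.F('use_left') - 1
    )
    return claimed == 1  # False: unknown or exhausted code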
I use Django 1.10.1, PostgreSQL 9.5 and Redis.
I have a table that stores user votes and looks like:
==========================
object | user | created_on
==========================
where object and user are foreign keys to the id column of their own tables respectively.
The problem is that in many situations I have to list many objects on one page. If the user is authenticated, I have to check for every object whether it was voted on or not (and act depending on the result, e.g. show a vote or an unvote button). So in my template I have to call a function like this for every object on the page:
def is_obj_voted(obj_id, usr_id):
    return ObjVotes.objects.filter(object_id=obj_id, user_id=usr_id).exists()
Since I may have tens of objects on one page, I found using django-debug-toolbar that the database access alone can take more than one second, because each query touches just one row and the queries run serially for all objects on the page. To make it worse, I use similar queries on that table in other pages (i.e. filtering by user only or by object only).
What I am trying to achieve, and what I think is the right thing to do, is to hit the database just once to fetch all objects voted on by a given user (maybe when the user logs in, or at the first page hit that needs this data), and then filter that result further depending on what each page needs. Since I use Redis and the django-cacheops app, can they help me do that job?
In your case I'd go with getting an array of object IDs and querying all votes by the user's ID and this array, something like:
object_ids = [o.id for o in Object.objects.filter(YOUR CONDITIONS)]
votes = set(v.object_id for v in ObjVotes.objects.filter(object_id__in=object_ids, user_id=usr_id))

def is_obj_voted(obj_id, votes):
    return obj_id in votes
This way you only make one additional database query per page to fetch the user's votes.
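Since the question mentions django-cacheops: once cacheops is configured for the votes model, the per-user lookup can also be cached and invalidated automatically on writes. A sketch, assuming your CACHEOPS settings cover ObjVotes:

from cacheops import cached_as

@cached_as(ObjVotes, timeout=60)  # flushed automatically when ObjVotes changes
def voted_object_ids(usr_id):
    return set(ObjVotes.objects.filter(user_id=usr_id).values_list('object_id', flat=True))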
I am a bit stuck on how I should model this out.
Here is what I have:
I have a model called Location. In this model I have postal code, city, region, longitude, and latitude. This data is pre-populated with all of Canada's locations. You can imagine this table is quite large.
This is what I would like to achieve, but I am stuck on how to model it:
I would like to create a second model called Item. Each of these items will need to be tied to a location from the model above. The user story would be as follows:
User adds an item: I already know their postal code and city based on their cookie that I set.
User submits the form with their item: this is where I am confused as to how to model this data so that the item gets saved in the proper location.
I figured a FK would be the way to go, but that is way too inefficient for a number of obvious reasons (huge list, and it requires user input even though I already know their location before saving). So, since I already know their location from their cookie, should I create a new field in the Item model called location and just save the postal code in it? If I did that, I guess I would have to query the Location model for that location to pull in the proper info. I am not sure what the best way to go about this is, please help.
If you already know the user's location, and they're just entering an item, then the Item model should have a foreign key to Location, but you don't prompt for it on the form. Instead, fill in the Location before you save the item.
If you're using a ModelForm, then you'll want to exclude your location field so that it isn't displayed. You'll also want to set commit=False so that you can fill in the location yourself before saving the form data to the Item table.
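A minimal sketch of such a view, assuming the postal code sits in a cookie and Location can be looked up by it (the cookie name, URL names and lookup field are all illustrative):

from django import forms
from django.shortcuts import redirect, render

class ItemForm(forms.ModelForm):
    class Meta:
        model = Item
        exclude = ['location']  # never shown to the user

def add_item(request):
    form = ItemForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        item = form.save(commit=False)  # build the Item without saving it yet
        # Hypothetical lookup: resolve the Location from the postal-code cookie.
        item.location = Location.objects.get(postal_code=request.COOKIES['postal_code'])
        item.save()
        return redirect('item-detail', pk=item.pk)
    return render(request, 'items/add.html', {'form': form})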