The following code works fine until the filter gets its first match(es). On the following runs of the for loop, the query always returns 0. Even if that result set would also contain a match for the next row, it doesn't see it, so I don't think this is a cache issue (which would have been far-fetched anyhow).
for row in self._ordr.OrderRows.SalesOrderRow:
    available = row.Row_amount - Slot.objects.filter(rows_ids=row.Row_id).count()
    if available > 0 or row.Row_id in self.instance.rows_ids:
        # some code
Any ideas what I am doing wrong here?
This is the model code for that rows_ids field.
from django.db import models
from django_mysql.models import ListCharField
class Slot(models.Model):
    rows_ids = ListCharField(base_field=models.IntegerField(), size=10, max_length=(10 * 21), null=True)
Went through the documentation once more, and apparently it's a ListCharField issue. To match against any of the values stored in the field, you shouldn't compare against the field directly, but against field__contains. So the correct code is:
for row in self._ordr.OrderRows.SalesOrderRow:
    available = row.Row_amount - Slot.objects.filter(rows_ids__contains=row.Row_id).count()
    if available > 0 or row.Row_id in self.instance.rows_ids:
        # some code
Finally got it to work with this.
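For illustration, a small sketch of the two lookups as I understand them (the data is made up):

# Hypothetical data, just to show the membership lookup on a ListCharField.
Slot.objects.create(rows_ids=[101, 102, 103])

# Plain equality compares against the whole stored list...
Slot.objects.filter(rows_ids=[101, 102, 103]).exists()   # True
# ...while __contains matches rows whose list includes the single value.
Slot.objects.filter(rows_ids__contains=101).exists()     # True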
Is there a way in Django to achieve the following in one DB hit (Debug Toolbar shows 2 queries)?
q = SomeModel.objects.filter(name=name).order_by(some_field)
if q.count() == 0:
    q = SomeModel.objects.all().order_by(some_field)
I want to check if there are objects with a given name. If yes, then return them. If not, return all objects. All done in one query.
I've checked Subquery, Q, and conditional expressions, but I still don't see how to fit it into one query.
Ok, much as I resisted (I still think it's premature optimization), curiosity got the better of me. This is not pretty but does the trick:
from django.db.models import Q, Exists

name_qset = SomeModel.objects.filter(name=name)
q_func = Q(name_exists=True, name=name) | Q(name_exists=False)
q = SomeModel.objects.annotate(
    name_exists=Exists(name_qset)
).filter(q_func).order_by(some_field)
Tried it out and definitely only one query. Interesting to see if it is actually appreciably faster for large datasets...
Your best bet is to use .exists(); otherwise your code is fine:
q = SomeModel.objects.filter(name=name).order_by(some_field)
if not q.exists():
    q = SomeModel.objects.all().order_by(some_field)
A lot of websites will display:
"1.8K pages" instead of "1,830 pages"
or
"43.2M pages" instead of "43,200,123 pages"
Is there a way to do this in Django?
For example, the following code will return the number of objects in the queryset (e.g. 3,123):
Books.objects.all().count()
Is there a way to add a custom count filter to return "3.1K pages" instead of "3,123 pages"?
Thank you in advance!
First off, I wouldn't do anything that alters the way the ORM portion of Django works. There are two places this could be done; if you are only planning on using it in one place, do it on the frontend. With that said, there are many ways to achieve this result. Just to spout off a few ideas: on the backend you could write a property or classmethod on your model that calls count() and converts the result to something a little more human readable; on the frontend you could look for a JavaScript library that does the conversion.
I will edit this later from my computer and add an example of the property.
Edit: To answer your comment, the easier one to implement depends on your skills in Python vs JavaScript. I prefer Python, so I would probably do it somewhere on the model.
Edit 2: I have written an example to show how I would do this as a classmethod on a base model, or on the model that you need these numbers on. I found a Python package called humanize, took its function that converts numbers to a readable form, and modified it a bit to handle thousands and to drop some of the very large number conversions.
from django.db import models


def readable_number(value, short=False):
    # Modified from the `humanize` package on PyPI.
    powers = [10 ** x for x in (3, 6, 9, 12, 15, 18)]
    human_powers = ('thousand', 'million', 'billion', 'trillion', 'quadrillion')
    human_powers_short = ('K', 'M', 'B', 'T', 'QD')
    try:
        value = int(value)
    except (TypeError, ValueError):
        return value
    if value < powers[0]:
        return str(value)
    for ordinal, power in enumerate(powers[1:], 1):
        if value < power:
            chopped = value / float(powers[ordinal - 1])
            chopped = format(chopped, '.1f')
            if not short:
                return '{} {}'.format(chopped, human_powers[ordinal - 1])
            return '{}{}'.format(chopped, human_powers_short[ordinal - 1])
    return str(value)  # values >= 10 ** 18 fall back to the plain number


class MyModel(models.Model):
    @classmethod
    def readable_count(cls, short=True):
        count = cls.objects.all().count()
        return readable_number(count, short=short)


print(readable_number(62220, True))  # Returns '62.2K'
print(readable_number(6555500))      # Returns '6.6 million'
I would stick that readable_number in some sort of utils and just import it in your models file. Once you have that, you can just stick that string wherever you would like on your frontend.
You would use MyModel.readable_count() to get that value. If you want it under MyModel.objects.readable_count() you will need to make a custom object manager for your model, but that is a bit more advanced.
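If you do want it available as MyModel.objects.readable_count(), a minimal sketch of such a manager might look like this (the manager and model names are placeholders; it reuses the readable_number helper from above):

from django.db import models

# from yourapp.utils import readable_number  # wherever you placed the helper above

class ReadableCountManager(models.Manager):
    def readable_count(self, short=True):
        # self.count() issues a single COUNT query for this model.
        return readable_number(self.count(), short=short)

class Book(models.Model):
    title = models.CharField(max_length=200)

    objects = ReadableCountManager()

# Usage:
# Book.objects.readable_count()             # e.g. '3.1K'
# Book.objects.readable_count(short=False)  # e.g. '3.1 thousand'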
I have a simple EXCEL-sheet with names of cities in column A and I want to extract them and put them in a list:
import ftfy
from openpyxl import load_workbook

def getCityfromEXCEL():
    wb = load_workbook(filename='test.xlsx', read_only=True)
    ws = wb['Sheet1']
    cityList = []
    for i in range(2, ws.get_highest_row() + 1):
        acell = "A" + str(i)
        cityString = ws[acell].value
        city = ftfy.fix_text_encoding(cityString)
        cityList.append(city)
    return cityList

cityList = getCityfromEXCEL()
With a small file that worked perfectly (70 rows). Now I'm processing a big file (8300 rows) and it gives me this error:
/Library/Python/2.7/site-packages/openpyxl/workbook/names/named_range.py:121: UserWarning: Discarded range with reserved name
warnings.warn("Discarded range with reserved name")
but it does not abort; it just does not seem to continue anymore. Can someone tell me what might cause this? Is it something in the .xlsx? Any special hints on what I can look for?
It's supposed to be a friendly warning letting you know that some of the defined names are being lost when reading the file. Warnings in Python are not exceptions but informational notices.
Support for defined names is essentially limited to references to cell ranges in openpyxl at the moment. But they can refer to lots of other things like printing settings. However, if the objects/values they refer to are not preserved by openpyxl and the file is saved and later opened by Excel it might complain about the missing objects.
If you want to ignore it:
import warnings
warnings.simplefilter("ignore")
wb = load_workbook(path)
warnings.simplefilter("default")
In my case this warning shows up when filtering is on one of my worksheets. I wanted to suppress the warning so that it didn't bother my users and I just put this line in my code before the openpyxl.load_workbook call:
warnings.simplefilter("ignore")
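If you'd rather not change the warning filter globally, a scoped variant of the same idea (just a sketch; path is a placeholder for your workbook file) silences warnings only around the load call:

import warnings

from openpyxl import load_workbook

with warnings.catch_warnings():
    # Ignore warnings only while the workbook is being read.
    warnings.simplefilter("ignore")
    wb = load_workbook(path)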
I am a newbie and trying to make my unit test pass, but I am having problems with DateTimeField.
In my settings, I have USE_TZ = True and TIME_ZONE set.
Using MongoDb.
First the test gave me an error complaining about comparing offset-naive and offset-aware datetimes, so I changed auto_now_add=True to datetime.datetime.utcnow().replace(tzinfo=utc).
I still couldn't get the right time and date for my TIME_ZONE.
Then I put these OPTIONS in my database settings (settings.py):
'OPTIONS': {
    'tz_aware': True,
},
Now I can change my TIME_ZONE and the time and date show my local time, not UTC.
But when I run a model test:
nf.data_emissao = timezone.now()
...
#check if the nf is in database
lista_nfse = Nfse.objects.all()
self.assertEquals(lista_nfse.count(), 1)
nfse_no_banco = lista_nfse[0]
...
self.assertEquals( nfse_no_banco.data_emissao, nf.data_emissao)
My test fails:
AssertionError: datetime.datetime(2013, 8, 10, 2, 49, 59, 391000, tzinfo=<bson.tz_util.FixedOffset object at 0x2bdd1d0>) != datetime.datetime(2013, 8, 10, 2, 49, 59, 391122, tzinfo=<UTC>)
I see the diff between 391000 and 391122 but don't know how to fix that.
The problem looks to be that you are comparing two values that were assigned the time 'now' at two different points in time.
Writing unit tests against automatically generated dates is always tricky when trying to assert exact values, due to the ever-changing nature of time. However, there are a couple of techniques that can be used to create reliable tests in these scenarios:
If you are trying to assert that nfse_no_banco.data_emissao contains the time 'now', instead of asserting an exact value you could assert that the field's value falls within the last x milliseconds (see the sketch after this list). This gives you a fair level of confidence that the value was 'now' at the time it was assigned, but the downsides are (a) your test could be unreliable if the execution time of the test happens to take longer than x milliseconds, and (b) the test would return a false positive if for some reason the value was incorrectly assigned a time very close to now due to a programming error (which is highly unlikely).
You can monkey-patch datetime.datetime.utcnow to your own version of the method that returns a pre-set value for testing purposes, and then assert that value was assigned to nfse_no_banco.data_emissao. The downside is that it adds a little complexity to your test setup and teardown. However, it should result in a good test if the goal of your assertion is to verify that the field has been assigned the time now.
You can simply assert that the value of the field is not null (using self.assertIsNotNone(nfse_no_banco.data_emissao)). Although this is a much weaker assertion, in cases where you are relying on framework functionality (such as auto_now_add=True in Django) it will often suffice; the major upside is that this test is very simple and reliable.
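For the first approach, a minimal sketch (my own illustration, not part of the original answer) might look like this; the 500 ms tolerance is arbitrary, and the Nfse model and data_emissao field are taken from the question:

import datetime

from django.test import TestCase
from django.utils import timezone

# from yourapp.models import Nfse  # adjust this placeholder import to your app

class NfseDataEmissaoTest(TestCase):
    def test_data_emissao_is_recent(self):
        nfse_no_banco = Nfse.objects.all()[0]
        # Assert that the stored timestamp falls within the last 500 milliseconds.
        delta = timezone.now() - nfse_no_banco.data_emissao
        self.assertTrue(datetime.timedelta(0) <= delta <= datetime.timedelta(milliseconds=500))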
The best approach really depends on what you are trying to assert. From your question it appears that you are really trying to assert that nfse_no_banco.data_emissao was assigned the time now, and you are doing this yourself (rather than relying on a framework to do it for you), and therefore the second approach would make most sense.
Below is pseudo-code showing how you could do this in your test:
# Create a constant with a fixed value for utcnow
NOW = datetime.datetime.utcnow()

# Define a test function to replace datetime.datetime.utcnow
def utcnow_fixed_value():
    return NOW

class MyTest(TestCase):
    def setUp(self):
        # Replace the real version of utcnow with our test version
        self.real_utcnow = datetime.datetime.utcnow
        datetime.datetime.utcnow = utcnow_fixed_value

    def tearDown(self):
        # Undo the monkey patch and restore the real version of utcnow
        datetime.datetime.utcnow = self.real_utcnow

    def test_value_is_now(self):
        lista_nfse = Nfse.objects.all()
        self.assertEquals(lista_nfse.count(), 1)
        nfse_no_banco = lista_nfse[0]
        ...
        self.assertEquals(NOW, nfse_no_banco.data_emissao)
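Note that assigning to datetime.datetime.utcnow raises a TypeError in CPython, because datetime.datetime is a built-in type, so the pseudo-code above won't run as-is. A runnable sketch of the same idea (my own variation, not part of the original answer) patches django.utils.timezone.now with unittest.mock instead, since the question's code calls timezone.now(); the Nfse import path is a placeholder:

import datetime
from unittest import mock

from django.test import TestCase
from django.utils import timezone

# from yourapp.models import Nfse  # adjust this placeholder import to your app

# A fixed, timezone-aware stand-in for "now" (millisecond-aligned, which suits MongoDB).
FIXED_NOW = datetime.datetime(2013, 8, 10, 2, 49, 59, 391000, tzinfo=datetime.timezone.utc)

class NfseFixedNowTest(TestCase):
    @mock.patch('django.utils.timezone.now', return_value=FIXED_NOW)
    def test_data_emissao_is_fixed_now(self, mock_now):
        # While the patch is active, timezone.now() returns FIXED_NOW.
        nf = Nfse(data_emissao=timezone.now())
        nf.save()

        nfse_no_banco = Nfse.objects.get(pk=nf.pk)
        self.assertEqual(FIXED_NOW, nfse_no_banco.data_emissao)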
Based on django-schedule. I can't find the guy who made it.
Pardon me if I'm missing something, but I've been trying to get an event's occurrences, preferably for a given day.
When I use event.get_occurrence(date), it always returns nothing. But when I use event.get_occurrences(before_date, after_date), suddenly the occurrences on the previously attempted date show up.
Why won't this work with just one datetime object?
This difference probably lies in the actual design of these two methods. Frankly, get_occurrence is rather flawed in general. A method like this should always return something, even if it's just None, but there are scenarios where it doesn't explicitly return anything at all. Namely, if your event doesn't have an rrule, and the date you passed to get_occurrence isn't the same as your event's start, then no value is returned.
There's not really anything that can be done about that. It's just flawed code.
Building on the above answer, for the case where the event doesn't return an occurrence, the following snippet can force the retrieval of an occurrence when you are sure that it exists:
from dateutil.relativedelta import relativedelta

def custom_get_occurrence(event, start_date):
    occurrence = event.get_occurrence(start_date)
    if occurrence is None:
        # Scan the next three months of occurrences and pick the one starting at start_date.
        occurrences = event.get_occurrences(start_date, start_date + relativedelta(months=3))
        matches = [x for x in occurrences if x.start == start_date]
        if matches:
            occurrence = matches[0]
    return occurrence
The above code resolves the issue that occurs when the default get_occurrence doesn't return a result.