Django: order a queryset by a list of requested IDs (id__in)

I query a Django table by a list of IDs
hclistofcases = testcase.objects.filter(id__in=[182, 180, 184, 179, 178, 181, 183])
That works and returns a queryset, but the queryset is not in the list order (i.e. record 182 first and 183 last). Is there a way to ensure that the queryset is returned in the list order? I am currently using SQLite as the database.
Any help would be appreciated.
Thanks
Grant

#build one queryset per id and union them in the desired order
empty = testcase.objects.none()
querysets = []
for i in [182, 180, 184, 179, 178, 181, 183]:
    querysets.append(testcase.objects.filter(id=i))
return empty.union(*querysets)

According to the docs (https://docs.djangoproject.com/en/1.10/ref/models/querysets/#order-by) you can append .order_by('id') to get ascending order.
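That gives ascending id order, not the order of the original list. To preserve the list order in a single query, one common technique (a sketch assuming Django 1.8+, where order_by accepts query expressions; testcase is the model from the question) is a Case/When ordering expression:
from django.db.models import Case, When

ids = [182, 180, 184, 179, 178, 181, 183]
#map each id to its position in the list, then sort by that position
preserved = Case(*[When(id=pk, then=pos) for pos, pk in enumerate(ids)])
hclistofcases = testcase.objects.filter(id__in=ids).order_by(preserved)
This works on SQLite as well, since the CASE expression is evaluated by the database.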

Related

How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)

I'm using Odoo version 9 and I've created a module to customize the purchase order reports. Among the fields that I want displayed in the reports is the supplier reference for each article, but when I add the code that displays this field,
<span> <t t-esc="', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"/>
it displays an error when I try to print the report:
QWebException: "Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)" while evaluating
"', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"
PS: I didn't change anything in the purchase module.
I don't know how to fix this problem. Any ideas?
This happens because your purchase order has several order lines, while the expression assumes there is only one.
o.order_line.product_id.product_tmpl_id.seller_ids
will work only if there is exactly one order line; otherwise you have to loop through each one. Here o.order_line holds multiple order lines, so you cannot read product_id from the recordset as a whole. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids it will work, but you will only get the first order line's details. In order to get all of the order line details, you need to loop through them.
More than one purchase.order.line record was found; that's why you are getting a list of ids here, i.e. purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64). You have to select one of them. To see a result, just try this:
o.order_line[0].product_id.product_tmpl_id.seller_ids
If you want to show all of the seller ids on the report, use a for loop in the XML, as sketched below.
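A minimal sketch of that loop (field names come from the snippets above; the surrounding template structure is an assumption):
<!-- iterate over the order lines one at a time, so each access is a singleton -->
<t t-foreach="o.order_line" t-as="line">
    <span t-esc="', '.join([str(s.product_code) for s in line.product_id.product_tmpl_id.seller_ids])"/>
</t>
Each iteration binds a single purchase.order.line record to line, so the chained field access no longer trips the singleton check.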

cannot reindex duplicate axis

I am trying to merge multiple csv files in a folder.
They look like this (there are more than two df's in actuality):
df1
LCC  acres
2    10
3    20
4    40
5    5
df2
LCC  acres_2
2    4
3    2
4    40
5    6
6    7
I want to put all the dataframes into one list, and then merge them with reduce. To do this they need to have the same index.
I am trying this code:
import os
import pandas as pd

combined = []
reindex = [2,3,4,5,6]
folder = r'C:\path_to_files'
for f in os.listdir(folder):
    #read each file
    df = pd.read_csv(os.path.join(folder,f))
    #check for duplicates - returns empty lists
    print df[df.index.duplicated()]
    #reindex
    df.set_index([df.columns[0]], inplace=True)
    df=df.reindex(reindex, fill_value=0)
    #append
    combined.append(df)
#merge on 'LCC' column
final = reduce(lambda left, right: pd.merge(left, right, on=['LCC'], how='outer'), combined)
but this still returns:
Traceback (most recent call last):
  File "<ipython-input-31-45f925f6d48d>", line 9, in <module>
    df=df.reindex(reindex, fill_value=0)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\frame.py", line 2741, in reindex
    **kwargs)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\generic.py", line 2229, in reindex
    fill_value, copy).__finalize__(self)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\frame.py", line 2687, in _reindex_axes
    fill_value, limit, tolerance)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\frame.py", line 2698, in _reindex_index
    allow_dups=False)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\generic.py", line 2341, in _reindex_with_indexers
    copy=copy)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\core\internals.py", line 3586, in reindex_indexer
    self.axes[axis]._can_reindex(indexer)
  File "C:\Users\spotter\AppData\Local\Continuum\Anaconda2_2\lib\site-packages\pandas\indexes\base.py", line 2293, in _can_reindex
    raise ValueError("cannot reindex from a duplicate axis")
ValueError: cannot reindex from a duplicate axis
The problem is that you need to check for duplicated index values after setting the first column as the index:
#set index by first column
df.set_index([df.columns[0]], inplace=True)
#check for duplicates - returns NO empty lists
print df[df.index.duplicated()]
#reindex
df=df.reindex(reindex, fill_value=0)
Or check for duplicates in the first column instead of the index; the parameter keep=False returns all duplicates (if necessary):
#check duplicates in first column
print df[df.iloc[:, 0].duplicated(keep=False)]
#set index + reindex
df.set_index([df.columns[0]], inplace=True)
df=df.reindex(reindex, fill_value=0)
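If the duplicated rows can safely be discarded, a minimal sketch of dropping them before reindexing (assuming keeping the first occurrence of each LCC value is acceptable):
#drop rows whose index value has already been seen, keeping the first
df = df[~df.index.duplicated(keep='first')]
df = df.reindex(reindex, fill_value=0)
Once the index is unique, reindex no longer raises.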

How to calculate DynamoDB item size? Getting ValidationException (400 KB) with Boto

ValidationException: ValidationException: 400 Bad Request
{u'message': u'Item size has exceeded the maximum allowed size', u'__type': u'com.amazon.coral.validate#ValidationException'}
The item object I have has a size of 92004 bytes:
>>> iii
<boto.dynamodb2.items.Item object at 0x7f7922c97190>
>>> iiip = iii.prepare_full() # it is now in dynamodb format e.g. "Item":{"time":{"N":"300"}, "user":{"S":"self"}}
>>> len(json.dumps(iiip))
92004
>>>
The size I get, 92004 bytes, is less than 400 KB. Why do I see the above error when saving the item?
Any pointers?
EDIT:
I played around with different sizes of data:
>>> i00['Resources'] = "A" * 66848; len(json.dumps(i00))
68481
>>> i = Item(ct.table, data=i00); i.save()
True
>>> i.delete()
True
>>> i00['Resources'] = "A" * 66849; len(json.dumps(i00))
68482
>>> i = Item(ct.table, data=i00); i.save()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/dynamodb2/items.py", line 455, in save
    returned = self.table._put_item(final_data, expects=expects)
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/dynamodb2/table.py", line 835, in _put_item
    self.connection.put_item(self.table_name, item_data, **kwargs)
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 1510, in put_item
    body=json.dumps(params))
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 2842, in make_request
    retry_handler=self._retry_handler)
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/connection.py", line 954, in _mexe
    status = retry_handler(response, i, next_sleep)
  File "/var/www/virtualenv/ken/local/lib/python2.7/site-packages/boto/dynamodb2/layer1.py", line 2882, in _retry_handler
    response.status, response.reason, data)
ValidationException: ValidationException: 400 Bad Request
{u'message': u'Item size has exceeded the maximum allowed size', u'__type': u'com.amazon.coral.validate#ValidationException'}
In other words, the size of the CloudTrail data has to be less than 68482 bytes. I wonder why they claim the limit is 400 KB. Clearly, I am missing something.
Answering my own question, since it might help someone with the same problem.
I contacted AWS technical support, and here is the explanation:
I had 5 indexes on my DynamoDB table, and since the data is replicated for each index, the total data = 68481 * (5 + 1) = 410886 bytes, which is right around the 400 KB (409600-byte) limit.
I feel this is missing from the DynamoDB documentation, and it'd be nice if Amazon added it.
So, to summarize, the total data (item size) that ends up being saved in the DynamoDB table is: actual data * (number of indexes + 1).
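A quick sanity check of that formula in Python (the multiplier comes from the support explanation above, not from documented behavior, so treat it as an assumption):
#DynamoDB's item size limit, expressed in bytes
ITEM_LIMIT = 400 * 1024  # 409600

def effective_size(item_bytes, num_indexes):
    #per the explanation above: the item is stored once, plus once per index
    return item_bytes * (num_indexes + 1)

print effective_size(68481, 5)  # 410886 bytes, roughly the 409600-byte limit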
Can you share your input data, if there are no issues with that? Are you trying to insert bulk data using a flat file as input? It looks like DynamoDB is not able to interpret newlines and is treating all records as a single record.
I got a similar error, but for the hash key field, while trying a bulk data load using Hive scripts. I realized that the attributes should be tab-separated, and fixing the input format fixed the error for me.
Try inserting a single record at a time. If you don't get the above error, then the problem is the format of the data.

checking if two lists are equal in Maple

I've got the following lists :
list1:=[1, 5, 14, 30, 55, 91, 140, 204, 285, 385, 506, 650, 819, 1015,
1240, 1496, 1785, 2109, 2470, 2870]
list2:=[1, 5, 14, 30, 55, 91, 140, 204, 285, 385, 506, 650, 819, 1015,
1240, 1496, 1785, 2109, 2470, 2870]
each generated by a procedure I defined. I need to verify that they are equal, which is the case. However, when I tried to use the evalb function, as well as a flag that I updated during a loop, in both cases I got 'false' as the answer, along with the error message:
"error, final value in a for loop must be numeric or a character"
What am I doing wrong?
Maple will automatically resolve multiple copies of lists with identical entries to the same object. So to test equality, you don't even need to traverse the lists programmatically. You can just do:
evalb(list1=list2);
If however you'd like to do a more sophisticated comparison, you can use the verify command. For example, this will verify that the first list has the second list as a sublist:
verify([1, 2, 3, 4, 5], [2, 3, 4], superlist);
Calling verify with no verification (third) argument is equivalent to the evalb test above, e.g.:
verify(list1, list2);
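As for the loop error: that message typically appears when the loop bound is the list itself rather than a number (e.g. for i to list1 do). A minimal sketch of an elementwise comparison, assuming that was the cause:
# loop over positions, not the list itself; nops gives the length
flag := true:
for i from 1 to nops(list1) do
    if list1[i] <> list2[i] then flag := false; end if;
end do:
flag;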

input output in prolog

I want to read the characters from a file in Prolog and place them in a list.
Could someone help me out with it?
Thanks
SWI-Prolog offers read_file_to_codes/3. Usage example:
?- read_file_to_codes('/etc/passwd', Codes, []).
Codes = [114, 111, 111, 116, 58, 120, 58, 48, 58|...].
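If you want a list of one-character atoms rather than character codes, one way (a sketch using the standard atom_codes/2 and atom_chars/2 predicates) is to convert the result:
% read codes, then convert them to a list of one-character atoms
?- read_file_to_codes('/etc/passwd', Codes, []),
   atom_codes(Atom, Codes),
   atom_chars(Atom, Chars).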