I want to build a form which serves our IT requests process.
The fields would be:
Request description (1 per request)
Items required (Item + Qty + Cost + ADD NEW LINE)
Overall total Cost (1 per request)
How would I build the ITEMS REQUIRED part, where the three fields could be recreated under the previous line?
https://www.drupal.org/project/field_collection
This module will solve your problem.
With this module you can define multiple fields as one group; when you click to add another item, a new group of the same fields is displayed as a new item.
Read the project description carefully.
I have a model with title and description fields. I want to create a GIN index for all the words in the title and description fields, so I do it the following way using SQL:
STEP 1: Create a table with all the words in title and description, using the simple configuration
CREATE TABLE words AS
SELECT word FROM ts_stat(
    'SELECT to_tsvector(''simple'', COALESCE("articles_article"."title", ''''))
         || to_tsvector(''simple'', COALESCE("articles_article"."description", ''''))
     FROM "articles_article"'
);
STEP 2: Create the GIN index
CREATE INDEX words_idx ON words USING GIN (word gin_trgm_ops);
STEP 3: Search
SELECT word, similarity(word, 'sri') AS sml
FROM words
WHERE word % 'sri'
ORDER BY sml DESC, word;
Result:
  word  |   sml
--------+----------
 sri    |        1
 srila  |      0.5
 srimad | 0.428571
How do I do this in Django? I also need to keep the GIN index updated.
The Django docs suggest that you install the relevant btree_gin extension and add the following to the model's Meta class:
from django.contrib.postgres.indexes import GinIndex
from django.db import models

class MyModel(models.Model):
    the_field = models.CharField(max_length=512)

    class Meta:
        indexes = [GinIndex(fields=['the_field'])]
A relevant answer can be found here.
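Since the question ultimately searches with pg_trgm similarity, here is a minimal sketch of the equivalent query through the ORM, assuming the pg_trgm extension is installed and a model named Article with the title field from the question:

from django.contrib.postgres.search import TrigramSimilarity

results = (
    Article.objects
    .annotate(sml=TrigramSimilarity('title', 'sri'))  # similarity(title, 'sri')
    .filter(sml__gt=0.3)   # roughly the role of WHERE word % 'sri'
    .order_by('-sml')
)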
Regarding updating the index, Heroku suggests:
Finally, indexes will become fragmented and unoptimized after some time, especially if the rows in the table are often updated or deleted. In those cases it may be required to perform a REINDEX, leaving you with a balanced and optimized index. However, be cautious about reindexing big indexes, as write locks are obtained on the parent table. One strategy to achieve the same result on a live site is to build an index concurrently on the same table and columns but with a different name, and then dropping the original index and renaming the new one. This procedure, while much longer, won't require any long-running locks on the live tables.
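A minimal sketch of that concurrent-rebuild strategy through Django's database connection, assuming the index and table names from the SQL above; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction, so this must execute outside any transaction.atomic block:

from django.db import connection

def rebuild_words_index():
    # Requires autocommit mode: CONCURRENTLY fails inside a transaction.
    with connection.cursor() as cursor:
        # Build the replacement index without long write locks on the table.
        cursor.execute(
            "CREATE INDEX CONCURRENTLY words_idx_new "
            "ON words USING GIN (word gin_trgm_ops);"
        )
        # Swap the new index in for the old one.
        cursor.execute("DROP INDEX words_idx;")
        cursor.execute("ALTER INDEX words_idx_new RENAME TO words_idx;")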
I am working on a project where I need to insert large files into models (sometimes several gigabytes). Because the files can be large, the approach I am taking is to read them line by line and insert the rows into the Django model.
However, when an error is encountered in the process, how do I cancel the whole operation? What is the proper way to make sure that the rows are committed only after the entire file is processed without errors?
The other alternative is to create all the model objects in one go and insert them in bulk. Is this feasible for large datasets, and how would it work?
Here's my code:
class mymodel(models.Model):
    fkey1 = models.ForeignKey(othermodel1, on_delete=models.CASCADE)
    fkey2 = models.ForeignKey(othermodel2, on_delete=models.CASCADE)
    field1 = models.CharField(max_length=25, blank=False)
    field2 = models.DateField(blank=False)
    ...
    field12 = models.FloatField(blank=False)
And inserting data into the model from excel:
wb = load_workbook(datafile, read_only=True, data_only=True)
ws = wb.get_sheet_by_name(sheetName)
for row in ws.rows:
    if isthisheaderrow(row):
        # determine column arrangement and pass to next
        break
for row in ws.rows:
    if isthisheaderrow(row):
        pass
    elif isThisValidDataRow(row):
        relevantRow = <create a list of values>
        dictionary = dict(zip(columnNames, relevantRow))
        dictionary['fkey1'] = othermodel1Object
        dictionary['fkey2'] = othermodel2Object
        mymodel(**dictionary).save()
I should have looked harder: the commit can be delayed with the @transaction.atomic decorator. A more detailed description is given here: https://docs.djangoproject.com/en/1.11/topics/db/transactions/
The code being:
from django.db import transaction

wb = load_workbook(trfile, read_only=True, data_only=True)
ws = wb.get_sheet_by_name(sheetName)
revenueSwitch = True
for row in ws.rows:
    if ifHeaderReturnIndex(row, desiredColumns):
        selectedIndex = ifHeaderReturnIndex(row, desiredColumns)
        outputColumnNames = [row[i].value.replace(" ", "") for i in selectedIndex]
        # output_ws.append(outputColumnNames)
        break

@transaction.atomic
def insertrows():
    for row in ws.rows:
        if ifHeaderReturnIndex(row, desiredColumns):
            pass
        elif isRowValid(row, selectedIndex):
            newrow = [row[i].value for i in selectedIndex]
            dictionary = dict(zip(outputColumnNames, newrow))
            dictionary['UniqueRunID'] = run
            dictionary['SourceFileObject'] = TrFile
            TransactionData(**dictionary).save()

insertrows()
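As for the bulk alternative raised in the question, here is a minimal sketch using bulk_create with a batch size, so memory stays bounded even for multi-gigabyte files; the helper and variable names (ifHeaderReturnIndex, isRowValid, selectedIndex, run, TrFile) are taken from the code above:

from django.db import transaction

@transaction.atomic
def insertrows_bulk(batch_size=1000):
    batch = []
    for row in ws.rows:
        if ifHeaderReturnIndex(row, desiredColumns):
            continue
        if isRowValid(row, selectedIndex):
            newrow = [row[i].value for i in selectedIndex]
            dictionary = dict(zip(outputColumnNames, newrow))
            dictionary['UniqueRunID'] = run
            dictionary['SourceFileObject'] = TrFile
            batch.append(TransactionData(**dictionary))
        if len(batch) >= batch_size:
            # One INSERT for the whole batch instead of one per row.
            TransactionData.objects.bulk_create(batch)
            batch = []
    if batch:
        TransactionData.objects.bulk_create(batch)

Note that bulk_create bypasses the model's save() method and signals, and the surrounding @transaction.atomic still rolls everything back if any batch fails.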
I have a DynamoDB table, and I want to build a page where I can see its items. But since the table could have tens of thousands of items, I want to see them 10 items per page. How do I do that? How do I scan items 1000 to 2000?
import boto

db = boto.connect_dynamodb()
table = db.get_table('MyTable')
res = table.scan(attributes_to_get=['id'], max_results=10)
for i in res:
    print i
What do you mean by items 1000~2000?
There is no global order of hash keys (primary or index), so it's hard to define items 1000~2000 in advance.
However, it makes perfect sense to fetch the next set of items given the last returned item. To fetch the next page, you execute the scan method again, providing the primary key value of the last item in the previous page, so that the scan method can return the next set of items. The parameter name is exclusive_start_key and its initial value is None.
See more details in official docs.
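A hedged sketch of that loop with the legacy boto interface from the question; the exact format expected by exclusive_start_key (hash key alone vs. a (hash, range) tuple) depends on the table schema, so treat this as an illustration rather than a drop-in:

import boto

db = boto.connect_dynamodb()
table = db.get_table('MyTable')

last_key = None
while True:
    # Fetch one page of up to 10 items, starting after the last seen key.
    page = list(table.scan(attributes_to_get=['id'], max_results=10,
                           exclusive_start_key=last_key))
    for item in page:
        print item['id']
    if len(page) < 10:
        break                     # no further pages
    last_key = page[-1]['id']     # primary key of the last item on this page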
I have an existing table called empname in my postgres database, with columns (Projectid, empid, name, Location) and rows
(1, 101, Raj, India),
(2, 201, David, USA).
In the app console it will look like the following:
1) Projectid = Textbox
2) Ops = (View, Insert, Edit) - Dropdown
Case 1: If I enter project id 1 and select View, it displays all the records for Projectid = 1 (here, one record).
Case 2: If I enter project id 3 and select Insert, it asks for all the inputs (empid, name, location) and inserts a new row into the table.
Case 3: If I enter project id 2 and select Edit, it shows all the fields for that id; the user can edit any column and save, which updates the record in the existing table.
If no data is found for the given project id, it displays "no records found".
Please help me with this, as I am stuck on the models.
Once you have your models created, the next task should be the form classes. I can identify at least three forms you will need to create: one to display the information (case 1), another to collect information (case 2), and a last one to edit the information (case 3). Wire the forms up to the views and add the URLs.
A good reference could be a Django user registration form, since it has all three cases taken care of: http://www.tangowithdjango.com/book17/chapters/login.html
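For example, a minimal sketch of those three form classes, assuming a model named Empname mapped to the empname table; the class and field names are illustrative:

from django import forms
from .models import Empname  # assumed model for the empname table

class ProjectLookupForm(forms.Form):
    # Case 1: take a project id and look up matching records.
    projectid = forms.IntegerField()

class EmpnameInsertForm(forms.ModelForm):
    # Case 2: collect the inputs for a new row.
    class Meta:
        model = Empname
        fields = ['projectid', 'empid', 'name', 'location']

class EmpnameEditForm(forms.ModelForm):
    # Case 3: edit the columns of an existing row.
    class Meta:
        model = Empname
        fields = ['empid', 'name', 'location']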
I've got a query...
packages = Package.objects.annotate(bid_count=Count('items__bids'))
Which is supposed to give me a list of packages with the number of bids on each. It works great if there's only one item in the package, but if there are more it double-counts.
Each package consists of 1 or more items. Each bid is placed on 1 or more items within a package. I want to retrieve the number of bids placed on the items within that package.
If there is 1 bid placed on 2 items within a package, presently this will count as 2, I want it to return 1.
I tried Count('items__bids__distinct') but that didn't work. How can I do this?
I had the same problem and I found the solution:
from django.db.models import Count

packages = Package.objects.annotate(bid_count=Count('items__bids', distinct=True))
This makes the ORM emit COUNT(DISTINCT ...) rather than a plain COUNT.