Recently I was asked to create a common data item that will be used in different components by mapping it as a data source. This common data item template has approximately 40 fields (single-line text, drop-down list, custom controls, etc.). Based on this new template, around 500 Sitecore items will be created (as per the business: 500 different offices).
Based on the above data, the following components will be implemented:
Component X: will use 5 fields of the created common data items.
Component Y: will use 10 fields of the created common data items.
Similarly, more components will be created in the future as required.
Search: will searching these 500 common data items and displaying the results (using Coveo search) cause any performance issues?
Is it good practice to create 40 fields in one template? Will it cause any performance issues in the future?
Using this number of fields in a template won't cause any particular performance issue. However, a better solution might be to break these 40 fields down into logical groupings in separate "base" templates (don't let any items use those base templates directly). Then create the templates your editors will actually use by inheriting from a number of your base templates. This approach lets you re-use fields, eliminating duplication, while still making it easy to create templates specific to each purpose. It's also easier for editors to deal with items containing just the relevant fields, as this eliminates ambiguity and confusion.
My application has models such as the following:
import attr

@attr.s
class Employee:
    name = attr.ib(type=str)
    department = attr.ib(type=int)
    organization_unit = attr.ib(type=int)
    pay_class = attr.ib(type=int)
    cost_center = attr.ib(type=int)
It works okay, but I'd like to refactor my application toward more of a microkernel (plugin) pattern, where there is a core Employee model that might have just the name, and plugins can add other properties. I imagine one possible solution might be:
@attr.s
class Employee:
    name = attr.ib(type=str)
    labels = attr.ib(factory=list)
An employee might look like this:
Employee(
    name='John Doe',
    labels=['department:123',
            'organization_unit:456',
            'pay_class:789',
            'cost_center:012'],
)
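One practical consequence of the list approach is that every plugin has to agree on how labels are encoded and decoded. As a minimal sketch (the `'key:value'` split convention here is my assumption, not something fixed by Datastore):

```python
def parse_labels(labels):
    """Split 'key:value' label strings into a dict (assumes ':' never appears in keys)."""
    return dict(label.split(':', 1) for label in labels)

labels = ['department:123', 'organization_unit:456', 'pay_class:789', 'cost_center:012']
parsed = parse_labels(labels)
# parsed['department'] is the string '123' -- note that numeric typing is lost,
# which is exactly why range queries stop working with this encoding.
```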
Perhaps another solution would be to just create an entity for each "label" with the core employee as the ancestor key. One concern with this solution is that currently writes to an entity group are limited to 1 per second, although that limitation will go away (hopefully soon) once Google upgrades existing Datastores to the new "Cloud Firestore in Datastore mode":
https://cloud.google.com/datastore/docs/firestore-or-datastore#in_native_mode
I suppose an application-level trade-off between the list property and ancestor keys approaches is that the list approach more tightly couples plugins with the core, whereas the ancestor key has a somewhat more decoupled data scheme (though not entirely).
Are there any other trade-offs I should be concerned with, performance or otherwise?
Personally I would go with multiple properties, for many reasons, but it's possible to mix all of these solutions for varying degrees of flexibility as required by the app. The main trade-offs are:
a) You can't do joins in Datastore, so storing related data in multiple entities will prevent querying with complex where clauses (ancestor key approach).
b) You can't do range queries if you store numeric and date fields as labels (list property approach).
c) The index could be large and expensive if you index your labels field and only a small set of the labels actually needs to be indexed.
So, one way to think about mixing all three is:
a) For your static data and application logic, use multiple properties.
b) For dynamic data that is not going to be used for querying, you can use a list of labels.
c) For pluggable data that a plugin needs to query on but doesn't need to join with the static data, you can create another entity that again uses a) and b), so the plugin stores all its related data together.
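The three options above can be sketched at the model level like this (using stdlib dataclasses in place of attrs to keep the sketch dependency-free; the entity names and fields are illustrative, and the actual Datastore persistence calls are omitted):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Employee:
    # a) static data used by application logic: regular properties
    name: str
    department: int
    # b) dynamic data that is never queried: free-form labels
    labels: List[str] = field(default_factory=list)

@dataclass
class PayrollRecord:
    # c) plugin data queried independently: its own entity (a separate
    # Datastore kind), holding only a reference back to the core employee
    employee_name: str
    pay_class: int
    labels: List[str] = field(default_factory=list)

e = Employee(name='John Doe', department=123, labels=['badge_color:blue'])
p = PayrollRecord(employee_name=e.name, pay_class=789)
```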
I have a problem I'm trying to solve, but I'm at a standstill because I'm still in the process of learning Qt, which causes doubts as to what the 'Qt' way of solving the problem is, while also being the most efficient in terms of time complexity. I read a file line by line (line counts ranging between roughly 10 and 2,000,000). At the moment my approach is to dump every line into a QVector.
QVector<QString> lines;
lines.append("id,name,type");
lines.append("1,James,A");
lines.append("2,Mark,B");
lines.append("3,Ryan,A");
Assuming the above structure, I would like to give the user three views that present the data based on the type field. The data is comma-delimited in its original form. My question is: what's the most elegant, and ideally most efficient, way to achieve this?
Note: for a visual aid, the end result kind of emulates Microsoft Access, so there will be a list of tables on the left side. In my case these table names will be the values of the grouping field (A, B). When I switch between those two list items, the central view (a table) will refill to contain that particular group's data.
Should I split the data into some number of separate structures? Or would that cause unnecessary overhead?
Would really appreciate any help
In the end, you'll want to have some sort of a data model that implements QAbstractItemModel that exposes the data, and one or more views connected to it to display it.
If the data doesn't have to be editable, you could implement a custom table model derived from QAbstractTableModel that maps the file in memory (using QFile::map), and incrementally parses it on the fly (implement canFetchMore and fetchMore).
If the data is to be editable, you might be best off throwing it all into a temporary sqlite table as you parse the file, attaching a QSqlTableModel to it, and attaching some views to it.
When the user wants to save the changes, you simply iterate over the model and dump it out to a text file.
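The temporary-sqlite-table route translates into very little code. Here's a language-neutral sketch using Python's stdlib sqlite3 (a QSqlTableModel would sit on top of the same kind of table; the table and column names are illustrative):

```python
import sqlite3

# Rows as they might come out of the parsed CSV file.
rows = [("1", "James", "A"), ("2", "Mark", "B"), ("3", "Ryan", "A")]

con = sqlite3.connect(":memory:")  # a throwaway database, as suggested above
con.execute("CREATE TABLE people (id TEXT, name TEXT, type TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)", rows)

# Refilling the central view when the user clicks group 'A' on the left
# is just a filtered query -- no need to split the data into structures:
group_a = con.execute(
    "SELECT id, name FROM people WHERE type = ?", ("A",)
).fetchall()
```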
How do people deal with index data (the data usually shown on index pages, like a customer list) versus the full model detail data?
When somebody goes to the customer/index route -- they only need access to a small subset of the full customer resource. Since I am dealing with legacy data, my customer model has > 10 relationships. It seems wasteful to have the api return a complete and full customer representation for every customer just to render a list/select/index view.
I know those relationships are somewhat lazy-loaded, but it still takes effort on the backend to pull all those relationships in. For some relationships (such as customer->invoices) this could be a large list of ids.
I feel answers to this can be very opinionated. But my two cents:
The API you are drawing on for your data should have an end-point to fetch the subset of data you're interested in, e.g. /api/mini-customer vs /api/customer.
You can then either define two separate models (one to represent the model in the list and one to represent the detailed view), or simply populate the original model with the subset of data and merge the extra data in at a later point.
That said, I've also seen plenty of cases such as the one you describe, where you load all data initially and just display the subset to begin with. If it's reasonable that the data will eventually be used and your page-load constraints can handle it, then this can be an acceptable approach.
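As a sketch of the two-representation idea, the "mini" endpoint is essentially a projection of the full record down to the fields the list view needs (the record shape, field names, and endpoint split here are made up for illustration):

```python
# Hypothetical full customer record as the detail endpoint might return it.
customer = {
    "id": 42,
    "name": "Acme Corp",
    "invoices": [101, 102, 103],  # potentially a very large id list
    "contacts": [7, 8],
    "notes": "long free-form text...",
}

INDEX_FIELDS = ("id", "name")  # what a /api/mini-customer endpoint would expose

def to_index_representation(record, fields=INDEX_FIELDS):
    """Project a full record down to the subset an index/list view needs."""
    return {k: record[k] for k in fields}

mini = to_index_representation(customer)
```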
I am trying to use a value list to show multiple repeating data, but I would like the list not only to show in repeating fields, but also to dwindle or reduce as the user makes more selections (almost like a hierarchical tree).
Is this recursion within my relationship, or just conditionals, with the repeating fields showing data based on whatever the calc may be?
A bit lost, but I would like to know:
Where to start defining the value list(s)
How to connect repeating fields to value list
How to reduce that list based on prev selection
Thanks
It would be worthwhile to start with this article on conditional value lists: http://help.filemaker.com/app/answers/detail/a_id/5833/~/creating-conditional-value-lists
Once you understand that, you should be able to apply similar reasoning to create the best interface for your solution (which could be value lists, drill-down portals, or some other method).
As an aside, using repeating fields will generally make it harder for you to get data in and out of your database and to perform calculations on the data. You might wish to use individual fields and related tables in place of repeating fields when possible.
I have a model "Messages" which I use to store messages throughout the site. These are messages in discussions, private messages and probably chat. They are all stored in one table. I wonder if it will be faster if I spread messages among several models and tables. One for chat, one for discussions and so on.
So should I keep all messages in one table/model or create several identical models/tables?
As long as you have an index on your type column and filter on that, it will be about the same speed. When your table gets really big, just shard on the type column and you'll get the same performance as using multiple tables, but your app will still see one big table.
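To illustrate the single-table-plus-index point with stdlib sqlite3 (the schema here is illustrative, not your actual Django table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, type TEXT, body TEXT)")
con.execute("CREATE INDEX idx_messages_type ON messages (type)")  # the key part
con.executemany(
    "INSERT INTO messages (type, body) VALUES (?, ?)",
    [("chat", "hi"), ("discussion", "hello"), ("chat", "bye")],
)

# An equality filter on the indexed column can use the index instead of
# scanning the whole table, which is why one table stays fast:
chat_count = con.execute(
    "SELECT COUNT(*) FROM messages WHERE type = ?", ("chat",)
).fetchone()[0]
```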
One table will be better for search purposes (you can search across all of the messages at once).
However, multiple tables may be faster.
Why not use abstracted classes?
from django.db import models

class MessageBase(models.Model):
    subject = models.CharField(max_length=255)
    text = models.TextField()

class ChatMessage(MessageBase):
    pass
This will create two tables, with the table for ChatMessage just referring directly to the table for MessageBase (Django's multi-table inheritance). This gives you the best of both worlds: search using MessageBase to get messages of any kind, but save, and refer to, each message using its specific model class.
(please note, the python here might be slightly wrong, as it hasn't been tested, but I'm sure you get the idea!)