What kind of variable should I use? [closed] - templates

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am using Meteor and have an opinion question.
I have a series of templates that I designed for making interactive tables: sortable column headers, pagination, a reactive counter of the table's elements, etc. Until now I have been storing several pieces of information (current page, items per page, and sort order) as session variables so that it was easy to access them from every template regardless of their relationship (parent, sibling...) to each other.
This has worked well enough, but now I want multiple tables on the same page. Since the session variables are statically named, information from one table gets overwritten by the other tables on the page.
I am working on a bunch of solutions and welcome other suggestions. What do you guys think?
1) Name every table and store all the information for every table on the site in a giant session variable, which would be an object keyed by tables' names. The downside here is that every table would need a unique name and I'd have to keep track of that. The upside is that implementing the table in new parts of the system could be easier than ever. Also, table sort/filter/page information would be stored even when leaving pages (but could be overridden if that were desired).
2) On the template that contains all the table parts, define reactive variables, then explicitly pass those down to lower levels with helpers. This would help with our goal of cleansing the system of session variables floating around (not that all session variables are bad), but would be a trickier refactor and harder to implement new tables with. Information would not be remembered when navigating between pages.
3) Each table template could reference the parent's reactive variables (messy, but possible) and look for specifically named ones (such as "table_current_page"). This would make setup of new tables easier than #2, but would only allow one table per template (but multiple per page would still be possible).
None of these are quite ideal, but I am leaning towards #1 or something like it. Suggestions? Thanks!

As the other user commented on your question, opinion-based questions are off-topic on SO, but here is my opinion anyway.
Option 1: I would not use this! This way, if one of the reactive parameters for one table changes, the helpers of all the other tables will re-run and those tables will re-render as well. As your application grows, and probably once you have more than 4-5 tables at a time, you may find the performance is not that good.
Option 2: I would definitely use this; as you mentioned, it is a much cleaner way. There is no performance impact (like the one mentioned for option 1), even if you have multiple tables on the same page.
Option 3: If you do this you will have a very strong dependency between those templates, so the child templates cannot be used independently elsewhere.
So if you have enough time, go for option 2. If not, use option 1 with a slight change: instead of one large session variable, use multiple smaller session variables that have a unique table name as a prefix or suffix. This does pollute your Session variables, but it avoids the performance impact.
This is my opinion.

Related

Comparative database metrics for different implementations of Django Model [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Considering two approaches in designing Django models:
1 - Very few models, but each model has a long list of fields.
2 - Many models, each with a significantly shorter list of fields, but employing a multi-level hierarchical one-to-many structure.
Suppose they both use Postgres as the database. In general, what would be the comparative database size, latency in retrieving, processing and updating data for these two approaches?
In short: define models based on the business logic.
People often aim to optimize too early in the development process. As Donald Knuth said:
Premature optimization is the root of all evil.
You should see tables as a storage device for entities. For example, if you are making an e-commerce website, it makes sense to have a model for Product, a model for Order, and a model in between (the junction table of the many-to-many relation between Product and Order) that records how many times the product appears in the order.
By modeling this based on data, and not on a specific use case, it is usually simpler to add new features to your application. Furthermore it makes querying simpler, and therefore often has a positive effect on the overall performance.
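As a rough sketch (not from the original answer) of what such data-driven modeling could look like in Django, with illustrative model and field names:

```python
from django.db import models


class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)


class Order(models.Model):
    created_at = models.DateTimeField(auto_now_add=True)
    products = models.ManyToManyField(Product, through="OrderItem")


class OrderItem(models.Model):
    # The junction table: records how many times a product appears in an order.
    order = models.ForeignKey(Order, on_delete=models.CASCADE)
    product = models.ForeignKey(Product, on_delete=models.CASCADE)
    quantity = models.PositiveIntegerField(default=1)
```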
Furthermore, it is important to get used to the Django ORM tooling, and especially, as #markwalker_ says, to the select_related(…) and prefetch_related(…) method calls that load related data in bulk. The number of queries to the database is often a stronger indicator of how efficiently a program will run than the exact queries themselves: if your application makes a lot of queries, even simple ones, the round trips to the database will slow the application down significantly. If there is a bottleneck somewhere, you can run a profiler and look for the parts of the code that need to be optimized.
There are, for example, packages such as nplusone [GitHub] and scout that can detect N+1 query problems, which can then be resolved with select_related(…) and prefetch_related(…).
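As a hedged illustration of what those calls change, reusing the hypothetical models sketched above:

```python
# N+1 pattern: one query for the items, plus one extra query per item
# to fetch its related product.
for item in OrderItem.objects.all():
    print(item.product.name)

# select_related("product") issues a single query with a JOIN; it works
# for ForeignKey and OneToOne relations.
for item in OrderItem.objects.select_related("product"):
    print(item.product.name)

# prefetch_related does the equivalent for many-to-many and reverse
# relations: one extra query per relation, joined together in Python.
for order in Order.objects.prefetch_related("products"):
    print([p.name for p in order.products.all()])
```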

Django Best Practices -- Migrating Data

I have a table with data that must be filled in by users. Once this data is filled in, the status changes to 'completed' (status is a field inside the data).
My question is, is it good practice to create a table for data to be completed and another one with completed data? Or should I only make one table with both types of data, distinguished by the status?
Not just Django
This is actually a very good general question, not necessarily specific to Django. But Django, through its easy use of linked tables (ForeignKey, ManyToMany), is a good use case for one table.
One table, or group of tables
One table has some advantages:
No need to copy the data, just change the Status field.
If there are linked tables, then they don't need to be copied.
If you want to remove the original data (i.e., avoid keeping redundant data) then this avoids having to worry about deleting the linked data (and deleting it in the right sequence).
If the original add and the status change are potentially done by different processes then one table is much safer - i.e., marking the field "complete" twice is harmless but trying to delete/add a 2nd time can cause a lot of problems.
"or group of tables" is a key here. Django handles linked tables really well, so but doing all of this with two separate groups of linked tables gets messy, and easy to forget things when you change fields or data structures.
One table is the optimal way to approach this particular case. Using two tables requires you to enforce data integrity and consistency within your application rather than relying on the power of your database, which is generally a very bad idea.
You should aim to normalize your database (within reason) and utilize the database's built-in constraints as much as possible to avoid erroneous data, including duplicates, redundancies, and other inconsistencies.
Here's a good write-up on several common database implementation problems. Number 4 covers your 2-table option pretty well.
If you do insist on using two tables (please don't), then at least be sure to use an artificial primary key (i.e., a unique value that is NOT just the id) to help maintain integrity. There may be integer id values that match between the two tables, but there should only ever be one version of each artificial primary key value across them. Again, though, this is not the recommended approach, and it adds complexity to your application that you don't otherwise need.

API best practice - generic vs ad hoc methods [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I'm creating a REST API that will be used by a web page.
There are several types of data I should provide, and I'm wondering what would be the best practice:
Create one method that will return a complex object with all the needed data.
Pros: one call will be needed from the UI side to get all the data.
Cons: not a generic solution at all.
Create multiple autonomous methods.
Pros: generic enough to be used in the future by other components.
Cons: will require the UI to make several calls to the server.
Which one adheres more to best practices?
It ultimately depends on your environment, the data size, and the quantity of methods, but there are several reasons to go with the second option and only one to go with the first.
First option: One complex method
Reason to go with the first: The HTTP overhead of multiple requests.
Does the overhead exist? Of course, but is it really that high? HTTP is one of the lightest application-layer protocols and is designed to have little overhead. Its simplicity and light headers are some of the main reasons for its success.
Second option: Multiple autonomous methods
Now there are several reasons to go with the second option. Even when the data is large, believe me, it still is a better option. Let's discuss some aspects:
If the data size is large
Breaking the data transfer into smaller pieces is better.
HTTP is a best-effort protocol and data failures are very common, especially in the internet environment - so common that they should be expected. The larger the data block, the greater the risk of having to re-request everything.
Quantity of methods: Maintainability, Reuse, Componentization, Learnability, Layering...
You said it yourself: a generic solution is easier for other components to use. The simpler and more concise the methods' responsibilities are, the easier they are to understand and to reuse in other methods.
They are also easier to maintain and to learn: the more independent they are, the less one has to know in order to change them (or get rid of a bug!).
Taking REST into consideration here is important, but the reasons to break the components down into smaller pieces really come from an understanding of the HTTP protocol and good programming/software engineering.
So, here's the thing: REST is great. But not every pattern in its purest form works in every situation. If efficiency is an issue, go the one-call route. Or maybe supply both, if others will be consuming it and might not need to pull down the full complex object every time.
I'd say REST does not care about data normalization. Having two ways to get at the same data is not going to hurt.
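To make the two options concrete, here is a minimal sketch; Flask is used purely for illustration, and the endpoint and helper names are hypothetical:

```python
from flask import Flask, jsonify

app = Flask(__name__)


# Hypothetical data-access helpers standing in for real lookups.
def get_user():
    return {"id": 1, "name": "Alice"}


def get_orders():
    return [{"id": 42, "total": "19.99"}]


# Option 1: one composite endpoint returning everything the page needs.
@app.route("/api/page-data")
def page_data():
    return jsonify({"user": get_user(), "orders": get_orders()})


# Option 2: autonomous endpoints, each with a single responsibility,
# reusable by consumers that only need part of the data.
@app.route("/api/user")
def user():
    return jsonify(get_user())


@app.route("/api/orders")
def orders():
    return jsonify(get_orders())


if __name__ == "__main__":
    app.run()
```

Nothing stops you from offering both, as suggested above: the composite endpoint can call the same helpers as the granular ones, so supporting it costs little.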

Grails: Templates vs TagLibs. [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
In Grails, there are two mechanisms for modularity in the view layers: Template and TagLib.
While writing my own Grails app, I often face the same question when I need to write a UI component: should I use a template or a TagLib?
After searching the web, I didn't find a lot of best practices or rules of thumb concerning this design decision, so can you help me and tell me:
What is the main difference between the two mechanisms?
In which scenarios do you use a TagLib instead of a Template (and vice versa)?
There is definitely some overlap, but below are several things to think about. One way to think about it is that templates are like method-level reuse, while TagLibs are more convenient for API-level reuse.
Templates are great for when you have to format something specific for display. For example, if you want to display a domain object in a specific way, it's typically easier to do it in a template, since you are basically just writing HTML with some embedded Groovy. It's reusable, but I think its reusability is a bit limited; i.e. if you have a template, you'd use it in several pages, not in hundreds of pages.
On the other hand, a taglib is a smaller unit of functionality, but one you are more likely to use in many places. In a taglib you are likely to be concatenating strings, so if you are looking to create a hundred lines of HTML, they are less convenient. A key feature taglibs allow is the ability to inject / interact with services. For example, if you need a piece of code that calls an authentication service and displays the current user, you can only do that in a TagLib. You don't have to worry about passing anything to the taglib in this case - the taglib will go and figure it out from the service. You are also likely to use that in many pages, so it's more convenient to have a taglib that doesn't need parameters.
There are also several kinds of taglibs, including ones that allow you to iterate over something in the body, have conditionals, etc. - that's not really possible with templates. As I said above, a well-crafted taglib library can be used to create a reusable API that makes your GSP code more readable. Inside the same *taglib.groovy you can have multiple tag definitions, so that's another difference - you can group them all in one place, and call from one taglib into another.
Also, keep in mind that you can call a template from inside a taglib, or call taglibs within templates, so you can mix and match as needed.
Hope this clears it up for you a bit, though really a lot of this comes down to which construct is more convenient to code and how often it will be reused.
As for us...
A coder should expect to find object-specific presentation logic in a template, and nowhere else.
We use taglibs only for isolated page elements that are not related to business logic at all. Actually, we try to minimize their usage: it's too easy to write business logic in a taglib.
Templates are the conventional way to go; for instance, they support Layouts (which, by the way, could be called a third mechanism).

should an organization delete data (wiki, sharepoint, etc) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
There was a similar question discussed around collaboration tools, but one point wasn't fully agreed upon. Now that we have all of these collaboration and documentation tools (wikis, SharePoint sites, blogs, etc.) to keep track of project plans, business requirements, technical documentation, and so on, the question is: should we ever delete this data? As organizations evolve and reorganize and people come and go, a lot of this data becomes out of date, or is no longer relevant or correct.
One thought is that there may be useful stuff inside this data, so keep it around and preserve the information from that time; it would be good to have the historical context.
An opposing argument is that this data creates too much noise and can make it hard for people to find the latest, up-to-date data.
Thoughts?
We recently dealt with exactly this problem on our internal wiki. It's really important to keep the signal-to-noise ratio high, or you will find that users stop using the tool for content and find alternative channels. The vast majority of user searches on an internal knowledge base will be for current information. This strongly suggests that current information should be the easy-to-find default, and out-of-date content should be dealt with or made less accessible.
For example, in our organisation there was a widespread perception that 'most' of the information on our intranet was out of date and therefore could not be relied on. This led to immense inefficiencies, as individuals felt there was no option other than to contact one another directly, call meetings, make personal notes etc., in order to obtain current information. The combined administrative burden on the organisation was huge.
We chose to explicitly deprecate content which was no longer relevant, but had historical value. These pages are prominently marked with a 'deprecated' box at the top of the wiki page, and archived. They are still linked from their logical wiki sections for reference, but are clearly mothballed, and can be easily ignored if not required.
This makes it very clear that the information is not up to date. For truly useless old docs (as determined by the original author, or the wiki maintainer - me), we delete. But even in these cases, the pages are not truly gone. We use MediaWiki, which preserves the full history of every deleted page. These are still available to administrators, but the benefit of deletion is that they don't appear in searches and can't be navigated to by ordinary users.
The result for us has been a clear win. We now have an intranet which is genuinely useful to actual users. In the end that's much more important than worrying about endless 'what if this obsolete information is somehow relevant in the future' questions. The vast majority of it will never be required, by anyone, ever.
In short, don't be afraid to rigorously prune old stuff. The signal-to-noise ratio is what really matters.
I suppose a big part of the question is "can we afford to never delete?" as in, does the org have the drive space?
Memory is cheap, but drive space allocation can sometimes be conservative, probably to discourage projects and departments from being sloppy, etc.
I would say that if the space is there, always back up and version, because with Enterprise stuff, having a paper trail and history is more likely to pay off than to be a waste of space. Among the terabytes of data that will never get seen again, there is a line of code, a piece of documentation, or an email that will be priceless when it's needed.
Having said that, I also think redundancy should be avoided. If your wiki has seven articles on basically the same thing, that is not the same as a backup, because it means having to update seven places for every change, and this will lead to misinformation that counts against the value of a backup. If someone needs to know how something worked two years ago and pulls up the article that didn't get updated (or was just wrong), the entire backup system has become a risk instead of an asset.
Ironically, I do think that when fixing redundancy, the redundant copies should become part of the backup. This is where my viewpoints obviously clash, which is why I think it's important to a) always try to centralize sources and have things point to them, and b) fix redundancies early. If you can somehow tie them all together so that a search for the needed info makes the seeker aware of the other six articles, that would be an ideal patch, so long as it didn't create a crutch.
Long story short, it's better to archive data that never gets used than to wish you hadn't.