This is more of a philosophical question, and most likely there will be many different opinions; I'd like to get input from all of you.
Assume a large database with many different types of variables and many different types of "data" that would go into those variables.
Assuming a "Boolean" value 1/0, yes/no, true/false, male/female - I often use checkboxes or radio buttons - rarely if ever option lists.
For "medium-sized" lists of possibilities (names of 5 people, names of different cars, etc) I'll often use select lists - though I've used a select list for all 50 of the states.
For larger/longer lists I'll go to a jQuery autocomplete list, with local data (non-server).
My questions to you are:
Do you have a different approach to selecting methods for data input?
Do you have a specific "number" of elements at which you'll move from a select list to an autocomplete?
For me, it'll depend mostly on the application I'm building. Some applications already have certain input methods in place, in which case I often prefer consistency over alternatives that might be better suited but would make the application more chaotic.
With new applications my approach is pretty similar. Booleans often get checkboxes or radio buttons, but select lists aren't uncommon either. For more options, select lists are also my preferred choice. When I have a large form with a good number of select lists, I tend to keep booleans in select lists as well in order to keep the form consistent; having a bunch of select lists and then a single checkbox can look a bit odd.
Long lists depend on the content. If searching by typing will improve the user experience, an autocomplete list is often a good choice. Most browsers will also jump to the matching select list item when typing, though, so I don't always see the use for autocompletion when users would generally type the first few letters of the item anyway.
For very long lists, I've made quite a few server queries for autocompletion as well, e.g., linking to one of 10,000+ tickets. In this case, adding a slight delay after a key event before searching can prevent heavy server load.
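As a rough illustration of that delay, here's a minimal debounce sketch in TypeScript; the element id, the /api/tickets endpoint, and the 300 ms wait are all invented for the example:

let timer: ReturnType<typeof setTimeout> | undefined;
const input = document.querySelector<HTMLInputElement>("#ticket-search")!;

input.addEventListener("input", () => {
  if (timer !== undefined) clearTimeout(timer);
  // wait 300 ms after the last keystroke before hitting the server
  timer = setTimeout(async () => {
    const query = input.value.trim();
    if (query.length < 2) return; // skip very short queries
    const res = await fetch(`/api/tickets?q=${encodeURIComponent(query)}`);
    const suggestions: string[] = await res.json();
    // render the suggestion list here...
  }, 300);
});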
I'm trying to loop over all the records displayed on a page, from the selected one to the end of the rows.
For example, if I select the 5th row, it should loop through only the 5th and 6th rows (as there are no more rows below).
What I've been trying is this:
ProdOrderLine := Rec;
REPEAT
  // process ProdOrderLine here
UNTIL ProdOrderLine.NEXT = 0;
But it loops through all the records in the table, including ones that aren't even displayed on the page...
How can I loop over only the page's records, from the selected one to the last?
Try Copy instead of assignment. Assignment only copies the field values from one instance of a record variable to another; it does not copy filters or keys (sort order).
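A minimal sketch of the fix, reusing the variables from the question:

ProdOrderLine.Copy(Rec); // unlike :=, Copy carries the filters and sort order over
REPEAT
  // process ProdOrderLine here
UNTIL ProdOrderLine.NEXT = 0;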
Alas, I have to mention that it is an uncommon scenario to handle records like this in BC. The general best-practice approach would be to ask the user to select all the records he or she needs with Shift+click, Ctrl+click, or by dragging the mouse. In that case you would use SetSelectionFilter to instantly grab the selected records.
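In a page action, that would look roughly like this (a sketch in the question's C/AL style, not production code):

CurrPage.SETSELECTIONFILTER(ProdOrderLine);
IF ProdOrderLine.FINDSET THEN
  REPEAT
    // process each record the user selected
  UNTIL ProdOrderLine.NEXT = 0;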
This is how it works across the system, and this is how users should be taught to work. It is a bad idea to add a way of interacting with records that only works on one page in the whole system, even if users are asking for it bursting into tears. They probably just had this type of interaction in some other system they worked with before. I know this is a tough fight, but it is worth it, for the sake of the stability (less code = fewer bugs) and predictability (a certain way of interaction works across all pages) of the system.
In the official documentation there is a passage whose reasoning I can't fully understand:
When working with time series, do not leverage the transactional behavior of rows. Changes to data in an existing row should be stored as a new, separate row, not changed in the existing row. This is an easier model to construct, and it enables you to maintain a history of activity without relying upon column versions.
The last sentence is neither obvious nor concrete, so it doesn't convince me. For now, using versioning to update a cell's data still looks to me like a good fit for the update task. At least versions are managed by Bigtable, so it's a simpler solution.
Can anybody please provide a more concrete explanation of why versioning shouldn't be used in that use case?
Earlier on that page, under "Patterns for row key design", a bit more detail is given. The high-level view is that using row keys instead of column versions will:
Make it easier to run queries across your data, allowing for scanning of less data.
Avoid going over the recommended maximum row size.
The one caveat being:
It is acceptable to use versions of a column where the use case is actually amending a value, and the value's history is important. For example, suppose you did a set of calculations based on the closing price of ZXZZT, and initially the data was mistakenly entered as 559.40 for the closing price instead of 558.40. In this case, it might be important to know the value's history in case the incorrect value had caused other miscalculations.
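To make the row-key pattern concrete, here's a small hypothetical TypeScript sketch; appending a reversed timestamp so the newest rows sort first is a common Bigtable design, but the names and the write() call here are invented:

const MAX_TS = 9_999_999_999_999; // far-future millisecond timestamp (assumption)

function rowKey(seriesId: string, timestampMs: number): string {
  // reversed timestamp: newer measurements get smaller suffixes, so they sort first
  const reversed = (MAX_TS - timestampMs).toString().padStart(13, "0");
  return `${seriesId}#${reversed}`;
}

// Each new closing price becomes a new row rather than a new cell version:
// write(rowKey("ZXZZT", Date.now()), { close: 558.40 });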
Additional context:
Users can buy one or more items every time they shop. I'm trying to figure out the pros/cons of two approaches. I've written out what I think are the pros of each (no need to call out cons, since a con of one can be written as a pro of the other), but I want to get feedback from the community.
Approach 1:
Build a single model, e.g., Items, where there is a record for every item in the transaction.
Pros:
Generally simpler, one model is always nice
Aligns well with the fact that items are priced and cancelled/refunded individually (i.e., there isn't really any discount or fee occurring at the Purchase level that would either 1) not be allocated to individual items or 2) not merit its own model)
Approach 2:
Build two models, e.g., Purchases and Items, where a Purchase is a parent record that represents the transaction, and Items are the child records that represent every item bought in that transaction.
Pros:
For the business, I think it's easier in two ways: 1) it's easier to run analytics, for example to figure out how many items people buy each time they make a purchase transaction (this isn't impossible with Approach 1, but it's certainly easier with Approach 2); and perhaps most importantly, 2) from a fulfillment perspective, it seems easier to send the fulfillment center one Purchase with many Items, since the delivery dates will all be the same, rather than a bunch of Items that they then have to aggregate (again, not impossible with Approach 1, but much easier with Approach 2)
This can get quite complicated, and in the past I've used far more advanced versions of #2. You want to normalise your data as much as possible (look up database normalisation for further info) to make it easier to run reports, but also to maintain consistency of data and reduce duplication. In some real-world scenarios it's not always possible to fully normalise, and processing performance considerations sometimes play a part too: if you fully normalise your data, you split it into many small chunks (e.g. rows in tables), but to reconstruct it you then have to retrieve it from many locations (e.g. multiple database queries), which carries a performance hit.
Go with #2, and thoroughly plan how you are going to structure your data before you get too far into coding it. For a well-structured model, it should be reasonably straightforward to expand on the system in future. A flat structure can become a nightmare to maintain.
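For what it's worth, a minimal TypeScript sketch of what #2 could look like (all field names are illustrative, not a prescription):

interface Purchase {
  id: string;
  customerId: string;
  purchasedAt: Date; // one delivery date per transaction helps fulfillment
}

interface Item {
  id: string;
  purchaseId: string; // foreign key to the parent Purchase
  sku: string;
  priceCents: number;
  status: "active" | "cancelled" | "refunded"; // items settle individually
}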
Assume that I have a list of employee names from a database (thousands, potentially tens of thousands in the near future). To make the problem simpler, assume that each first-name/last-name combination is unique (a big if, but that's a tangent).
I also have an RSS stream of news content that pertains to the business (again, it could be hundreds of items per day).
What I would like to do is detect whether an employee's name appears in a several-paragraph news item and, if so, 'tag' the item with the person it's talking about.
There may be more than one employee named in a single news item so breaking the loop after the first positive match isn't a possibility.
I can certainly brute-force it: for every news item, loop over each and every employee name, and if a regular expression returns a match, make note of it.
Is there a simpler way in ColdFusion or should I just get on with my nested loops?
Just throwing this out there as something you could do...
It sounds like you'll almost always have significantly more employee names than words per post. Here's how I might handle it:
Have an always-running CF app that will pull in the feeds and, onAppStart:
Grab all employees from your db
Create an app-scoped lookup struct with first names as keys and a struct of last names as values (you could also add a third tier for middle names, sibling to the last names, if desired).
So one key in the lookup might be "Vanessa", with a struct with two keys ("Johnson" and "Forta") as its value.
Then, for each article you parse, just listToArray with a space as the delimiter and loop through the array, doing a simple structKeyExists check on each token. For matches, check the next item in the array as a last name.
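The same idea translates to any language; here's a rough TypeScript sketch (the tokenization is deliberately naive, and all names are invented):

type NameIndex = Map<string, Set<string>>; // first name -> set of last names

function buildIndex(employees: { first: string; last: string }[]): NameIndex {
  const index: NameIndex = new Map();
  for (const { first, last } of employees) {
    const key = first.toLowerCase();
    const lasts = index.get(key) ?? new Set<string>();
    lasts.add(last.toLowerCase());
    index.set(key, lasts);
  }
  return index;
}

function findEmployees(article: string, index: NameIndex): string[] {
  const tokens = article.toLowerCase().split(/\s+/);
  const hits: string[] = [];
  for (let i = 0; i < tokens.length - 1; i++) {
    // token matched a first name? check the next token as a last name
    const lasts = index.get(tokens[i]);
    if (lasts?.has(tokens[i + 1])) {
      hits.push(`${tokens[i]} ${tokens[i + 1]}`); // keep going: more names may follow
    }
  }
  return hits;
}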
I'd guess this would be much more performant processing-wise than doing however many searches, and it would also take almost no time to code; you can feed in any future sources extremely simply (your checker takes one argument: any text on Earth).
Interested to see what route you go and whether your experiments expose anything new about performance in CF.
Matthew, you have a tall order there, and there are really multiple parts to the challenge/solution. But just in terms of comparing a list of values against a given block of text to see if any of them occur in it, you'll find that there's no single CF function for that. Because of that, I created a new one, findList, available at CFLib:
http://cflib.org/index.cfm?event=page.udfbyid&udfid=1908
It's not perfect, nor as optimal as it could be, but it may be a useful first step for you, or give you some ideas. That said, it suited my need (determining whether a given blog comment referenced any blacklisted words). I show it comparing a list of URLs, but it could be any words at all. Hope that's a little helpful.
Another option worth exploring is leveraging the Solr engine that ships with CF now. It will do the string search heavy lifting for you and you can probably focus on dynamically keeping your collections up to date and optimized as new feed items come in.
Good luck!
I have a data entry page where the user is required to make some selections from a list. Currently it is just a checklist with about 10 items they can tick, but it will soon expand to about 230. What is a good UI paradigm for dealing with a large number of selectable items? I am considering a dual-list type control.
Dual list, BUT, for a large # of non-groupable elements:
MUST have ability to select multiple elements (Duh!)
SHOULD have ability to select ALL elements with a click
SHOULD have ability to search (in either list), and select all matching elements
Also, if the lists are REALLY big (1k+), you may run into trouble with slow rendering.
If so, you can also "paginate" the list - e.g. display only first N elements, allow selection from those, and then ability to shift the "frame" to next N elements.
(all the above, BTW, are real attributes of a solution we implemented in an enterprise web app needing a selection list with 30k possible values which could not be grouped).
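For illustration, a tiny TypeScript sketch of the "select all matching" and paginated-frame ideas (all names invented):

interface Option { label: string; selected: boolean; }

// SHOULD-have: search, then select every match in one click
function selectAllMatching(options: Option[], query: string): void {
  const q = query.toLowerCase();
  for (const opt of options) {
    if (opt.label.toLowerCase().includes(q)) opt.selected = true;
  }
}

// for very large lists: render only one "frame" of N items at a time
function frame(options: Option[], page: number, size: number): Option[] {
  return options.slice(page * size, (page + 1) * size);
}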
Are the items grouped in any way? If so, a collapsible tree-type navigation might be useful.
It really depends on the situation and how much space you have, but in most cases I prefer the dual-list control (aka list builder) you were thinking about.
Here's a nice link for inspiration (requires Silverlight): http://good.ly/qh7aeg8
Here's an accessible way using only HTML and JavaScript:
Use the HTML fieldset tag to chunk them into logical groups;
use (say) jQuery to show/hide each group;
add navigation at the top to jump to each group.
If you hide all the groups initially, users can click the link for the groups they want to complete. Further, if you add a rollover (could just be a tooltip title attribute on the links for accessibility) with a description of each group, users will have a 'preview' before visiting it.
Finally, if the labels are short enough, give the fieldsets a width and make them into columns using CSS float or absolute positioning.
Try to stick to valid (X)HTML, CSS and JavaScript - there are plenty of precedents for this.
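For illustration, here's a rough TypeScript sketch of the show/hide navigation; the data-group hook and the class name are invented, not part of any standard:

// assumes each chunk is a <fieldset class="group" id="..."> and each nav link
// carries a data-group attribute naming the fieldset it reveals
document.querySelectorAll<HTMLAnchorElement>("nav a[data-group]").forEach(link => {
  link.addEventListener("click", evt => {
    evt.preventDefault();
    document.querySelectorAll<HTMLElement>("fieldset.group")
      .forEach(fs => { fs.hidden = true; }); // hide every group first
    const target = document.getElementById(link.dataset.group!);
    if (target) target.hidden = false; // reveal only the chosen group
  });
});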