Writing my first Dynamic Business Rule

I've been working with Endeca at arm's length for three years. Now I need to write my first Dynamic Business Rule.
I have records with a property, say "ActiveField". As a business rule, I need to take the value of "ActiveField" and return the records that match it. I'll restrict it to 20 results with the Style.
I've read about writing Dynamic Business Rules, and I've gone through the dialog box. I can't find where I'd need to write the logic that makes the matches. If it were SQL, I expect I'd type something like:
SELECT record.name, record.id WHERE record.ActiveField = #ActiveField
I appreciate Endeca might not work like this, and may instead convey this functionality through drop-down boxes that are written to XML config files.
But I can't find any hint of this level of complexity in the documentation; I'm probably missing something, since this is fundamental.

Business rules are triggered by search / navigation states, not by records.
Rules can be created in several places depending on your deployment:
1. Developer Studio
2. Merchandising Workbench (Page Builder or Rule Manager)
3. Experience Manager (which has replaced Merchandising Workbench in the most recent releases)
In any of these locations you can set a trigger for your rule, which can be a search term, one or more dimension values, or a combination of the two.
The actual records returned do not affect whether or not the rule is triggered. At that point your application has to take over and do something with the rule.
Best of luck.

Related

Automate sequential integer IDs without using Identity Specification?

Are there any tried/true methods of managing your own sequential integer field w/o using SQL Server's built in Identity Specification? I'm thinking this has to have been done many times over and my google skills are just failing me tonight.
My first thought is to use a separate table to manage the IDs and use a trigger on the target table to manage setting the ID (a rough sketch of the reservation step I have in mind appears at the end of this question). Concurrency issues are obviously important, but insert performance is not critical in this case.
And here are some gotchas I know I need to look out for:
1. Need to make sure the same ID isn't doled out more than once when multiple processes run simultaneously.
2. Need to make sure any solution to 1) doesn't cause deadlocks.
3. Need to make sure the trigger works properly when multiple records are inserted in a single statement, not only for one record at a time.
4. Need to make sure the trigger only sets the ID when it is not already specified.
The reason for the last bullet point (and the whole reason I want to do this without an Identity Specification field in the first place) is that I want to seed multiple environments at different starting points, and I want to be able to copy data between them so that the ID for a given record remains the same across environments (and I have to use integers; I cannot use GUIDs).
(Also yes, I could set identity insert on/off to copy data and still use a regular Identity Specification field but then it reseeds it after every insert. I could then use DBCC CHECKIDENT to reseed it back to where it was, but I feel the risk with this solution is too great. It only takes one time for someone to make a mistake and then when we realize it, it would be a real pain to repair the data... probably enough pain that it would have made more sense just to do what I'm doing now in the first place).
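For illustration, here is roughly the shape of the reservation step I have in mind - just a sketch, with made-up table/column names and connection details, and Python + pyodbc standing in for whatever the real client code would be:

# Rough sketch of the key-table reservation step (Python + pyodbc;
# dbo.KeyTable, its columns, and the connection string are all made up).
import pyodbc

conn = pyodbc.connect("DSN=MyDb")  # placeholder connection
cur = conn.cursor()

def reserve_ids(table_name, count):
    # A single atomic UPDATE both increments the counter and reads the new
    # value back, so two concurrent callers can never be handed the same ids
    # and there is no separate read-then-write window to deadlock on.
    cur.execute(
        """
        UPDATE dbo.KeyTable
        SET NextId = NextId + ?
        OUTPUT inserted.NextId
        WHERE TableName = ?
        """,
        count, table_name,
    )
    last_id = cur.fetchone()[0]
    conn.commit()
    return range(last_id - count + 1, last_id + 1)

print(list(reserve_ids("Employee", 3)))  # e.g. [101, 102, 103]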
SQL Server 2012 introduced the concept of a SEQUENCE database object - something like an "identity" column, but separate from a table.
You can create and use a sequence from your code, use its values in various places, and more.
See these links for more information:
Sequence numbers (MS Docs)
CREATE SEQUENCE statement (MS Docs)
SQL Server SEQUENCE basics (Red Gate - Joe Celko)
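As a minimal sketch of what "create and use a sequence from your code" can look like - the sequence name, seed, and connection details below are illustrative assumptions, not anything from the linked docs:

# Minimal sketch of creating and using a SEQUENCE from client code
# (Python + pyodbc; sequence name, seed, and connection string are made up).
import pyodbc

conn = pyodbc.connect("DSN=MyDb")  # placeholder connection
cur = conn.cursor()

# Seed each environment at a different starting point so copied rows keep
# their original ids and newly generated ids never collide across copies.
cur.execute("""
    IF NOT EXISTS (SELECT 1 FROM sys.sequences WHERE name = 'EmployeeIdSeq')
        CREATE SEQUENCE dbo.EmployeeIdSeq AS int START WITH 10000 INCREMENT BY 1
""")
conn.commit()

# The server hands out the next value atomically, so the concurrency and
# deadlock gotchas from the hand-rolled approach largely disappear.
cur.execute("SELECT NEXT VALUE FOR dbo.EmployeeIdSeq")
print(cur.fetchone()[0])  # 10000 on the first call in this environment

A sequence can also be attached as a column default - DEFAULT (NEXT VALUE FOR dbo.EmployeeIdSeq) - which only fires when no explicit id is supplied, matching the "only set the ID when it is not already specified" requirement above.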

Updating a field in all records in elasticsearch

I'm new to ElasticSearch, so this is probably something quite trivial, but I haven't figured out anything better than fetching everything, processing it with a script, and updating the records one by one.
I want to make something like a simple SQL update:
UPDATE RECORD SET SOMEFIELD = SOMEXPRESSION
My intent is to replace the actual bogus data with some data that makes more sense (so the expression is basically randomly choosing from a pool of valid values).
There are a couple of open issues about making it possible to update documents by query.
The technical challenge is that Lucene (the text search engine library that elasticsearch uses under the hood) segments are read-only. You can never modify an existing document. What you need to do is delete the old version of the document (which, by the way, will only be marked as deleted until a segment merge happens) and index the new one. That's what the existing update API does. Therefore, an update by query might take a long time and lead to issues, which is why it hasn't been released yet. A mechanism that allows running queries to be interrupted would be a nice-to-have for this case too.
But there's the update by query plugin that exposes exactly that feature. Just beware of the potential risks before using it.
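For a rough idea of the shape of such a request - the exact endpoint and script syntax differ between the plugin and the update-by-query API that was later built in, so treat every name below as illustrative:

# Rough sketch of an update-by-query style request (Python + requests;
# the index name, field, and script syntax vary by Elasticsearch version).
import requests

body = {
    "query": {"match_all": {}},                      # which documents to touch
    "script": "ctx._source.somefield = 'newvalue'",  # how to rewrite each one
}
resp = requests.post("http://localhost:9200/myindex/_update_by_query", json=body)
print(resp.json())  # summarises how many documents were updated or failed

Under the hood this still does the delete-and-reindex dance described above for every matching document; the API just saves you the round trips.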

The best approaches for logging localization using C++

I am working on a multinational project where the target audience for logs might be from two nationalities. Therefore it is becoming important to log in more than one language. I am thinking about writing to two different log folders based on language every time I log something, but I am also wondering if there's some out-of-the-box functionality that comes along with logging frameworks like log4cpp?
As other commenters have mentioned, it sounds like you are going down the wrong track by looking to do multilingual logging.
My recommendation would be to use English (which is the standard for technical information, and which I guess is the language you know best) and to make sure that the language you use is clear, grammatically correct and unambiguous. Then if one of the technicians cannot understand it, they can very easily and efficiently run it through a machine translation engine such as Google Translate. Or indeed they could process the logs and run everything through Google Translate to append translated text, particularly if you annotate the logs to mark the language content.
Assuming that the input language is well-written, machine translation usually gives a good result which the end user can understand. If the message isn't clear, or has typos or abbreviations, then that's where machine translation fails spectacularly.
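If you go the post-processing route, the tooling can be tiny. A toy sketch, where translate() is a stand-in for whatever MT service or library you wire in (not a real API):

# Toy log post-processor that appends a machine translation to each line.
# translate() is a placeholder for a real MT client (e.g. a web API call).
def translate(text, target_lang):
    raise NotImplementedError("wire your MT service in here")

def append_translations(in_path, out_path, target_lang="fr"):  # pick your second language
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            line = line.rstrip("\n")
            dst.write(line + "\t" + translate(line, target_lang) + "\n")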
Writing logs naturally brings down execution speed due to the file open, seek, and write operations involved.
This is one primary reason why many developers and architects suggest writing logs at different levels, increasing the depth of log entries as the level increases to trace down problems better. At higher levels, you will notice that your process speed drops as more log entries are generated.
I would rather suggest you use services that can translate from one language to another.
I'm sure there are libraries, free or paid, that do this translation. You could create a small utility program that runs in the background and does this conversion during process idle time.
Well, one suggestion is that you can use a different process/thread which listens for your log messages and does the logging from there.
This reduces I/O logging time in your main process/thread, and you can make all the changes related to logging language over there.
For multilingual support, I think you can try writing with wide-character strings, though I am not sure.
Install Qt 4 and use QObject::tr() / the tr() macro for strings. Write strings in whatever language you want. Hire or find a translator to localize the strings using Qt Linguist.
Please note that perfect translation is impossible, so there will be many "amusing" misunderstandings, even if your translator is a genius. So it might be a better idea to select a main language for the programming team.
--EDIT--
Didn't notice this part before:
in more than one language
One way to approach it is to implement a log reader. Instead of writing plaintext messages, you could dump message ids (generated by some kind of macro) and string arguments if the strings are formatted. The "log reader" will allow the user to select the desired language while viewing the log file, and translate messages based on their ids/arguments using a mechanism similar to QTranslator. The good thing about this approach is that you'll be able to add more languages later, so it'll be possible to retranslate old logs. The bad things are that this format will be harder for a "normal human" to read (although you can add plaintext messages in addition to message ids and arguments) and that you'll need to write a log viewer.
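The core of that format is small. Here is a minimal sketch of the idea - shown in Python rather than C++/Qt purely for brevity, with made-up message ids and catalogs:

# Minimal sketch of the "ids + arguments" log format: the writer stores no
# human-readable text, and the reader formats entries in whichever language
# the viewer selects. Ids and catalog contents are invented for illustration.
import json

CATALOG = {
    "en": {1001: "Disk {0} is full", 1002: "User {0} logged in from {1}"},
    "fr": {1001: "Le disque {0} est plein",
           1002: "L'utilisateur {0} s'est connecté depuis {1}"},
}

def write_entry(log_file, msg_id, *args):
    log_file.write(json.dumps({"id": msg_id, "args": args}) + "\n")

def read_log(path, lang):
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            # Old logs can be "retranslated" simply by picking another catalog.
            yield CATALOG[lang][entry["id"]].format(*entry["args"])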
Qt 4 has most of this framework implemented (there are routines for dumping variants into text/data streams, and so on) along with a translation tool. See the QTranslator documentation and the Linguist manual for more info.

SharePoint: GetListItemChangesSinceToken vs GetListItemChangesWithKnowledge?

What is the difference between GetListItemChangesSinceToken and GetListItemChangesWithKnowledge?
Here is the awesome summary documentation, and about all that is said on the matter:
GetListItemChangesSinceToken: Returns changes made to the list since the date and time specified in the [change] token.
GetListItemChangesWithKnowledge: Returns all of the list items that meet specified criteria and that have changed since the date-time specified in the knowledge parameter for the specified list.
One takes a "change token" and the other takes "knowledge". However, I have not been able to find any documentation (or rationale) as to what advantage one has over the other, why they both exist, how they are fundamentally different, or which one is appropriate to use in protocol clients.
These SOAP services are formally defined in the [MS-LISTSWS]: Lists Web Service Protocol Specification protocol, but they seem identical, excepting the token they expect and emit. (Perhaps it is just the number of undocumented bugs?)
While GetListItemChangesWithKnowledge does have an additional syncScope parameter, MS-LISTSWS says:
[syncScope] MUST be null or empty ... [syncScope] is reserved and MUST be ignored
Any input -- especially first-hand knowledge -- is greatly appreciated.
You're probably right about the number of bugs being the difference...
Here is what I could find about both methods:
GetListItemChangesWithKnowledge (different MSDN documentation)
SharePoint 2010: Lists.GetListItemChangesWithKnowledge Method suggests that this method was introduced with SharePoint 2010 and SharePoint Workspace synchronization - I couldn't verify this though
The important bit is "returns all of the list items that meet specified criteria and that have changed since the date-time specified in the knowledge parameter for the specified list"
Diving further in: The knowledge element contains "Microsoft Sync Framework knowledge data structure" (MSDN), which for example is explained here (Microsoft Sync Framework, Part 2: Sync Metadata).
GetListItemChangesSinceToken (different MSDN documentation)
Should be used instead of GetListItemChanges according to MSDN (see link above). I'm assuming it should be used because the Change element further specifies the list item to get, as it says "If Nothing is passed, all items in the list are returned."
The changeToken actually contains something from the Change Log, which in turn has information about Adds, Deletes, Renames, etc. --> This is useful if you do in-depth synchronization in your application
Synchronizing with Windows SharePoint Services, Part 1 explains the synchronization, including a bit of information on the changeToken.
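For orientation, calling ...SinceToken is just a SOAP POST to Lists.asmx. A sketch (site URL, list name, and auth are placeholders; an empty changeToken returns all items plus a fresh token to use next time):

# Sketch of a raw GetListItemChangesSinceToken call (Python + requests).
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetListItemChangesSinceToken
        xmlns="http://schemas.microsoft.com/sharepoint/soap/">
      <listName>Documents</listName>
      <changeToken></changeToken>
    </GetListItemChangesSinceToken>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "http://server/site/_vti_bin/Lists.asmx",
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://schemas.microsoft.com/sharepoint/soap/"
                      "GetListItemChangesSinceToken",
    },
    auth=("user", "password"),  # real deployments typically need NTLM/Kerberos
)
print(resp.text)  # carries the new token plus rows from the change log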
Summary: It looks to me like the ...WithKnowledge method is a bit more complex, as it uses the Microsoft Sync Framework's query syntax, which includes a time constraint for changes. The ...SinceToken method only queries for all changes with a specified action (e.g. Delete), without a time constraint.
Ask yourself: do you really want to implement such complicated, poorly documented methods, which are subject to change? I would suggest doing two things: analyze (e.g. via Fiddler) the traffic SharePoint Workspace 2010 is generating (also check Word/Outlook). What methods is it using? Could you implement something similar? And isn't GetListItemChanges enough for most applications?

Custom client app - need ability to control where documents are saved

Okay SO. I need some guidance. I apologize for the length of this post, but I need to provide some details:
I've got someone who is interested in having me do a small project for them. The application in general is a fairly straightforward employee record keeping / documentation app, but it makes pretty heavy use of templated Word and Lotus documents. The idea is you select the employee "event", such as commendation, promotion, discipline, etc., and it loads the appropriate template doc and you fill it in from there; later you can select an employee, view all the "events," and view the individual documents associated with each one.
Thus, the app must know where the .docs are saved when the user is done.
The client actually has a v1 of this app (it doesn't do any management of the files or anything, just launches Word/Lotus with the document you wanted to view in a new instance, presumably via a system() call). We've not gotten into a detailed requirements phase, but the client and I agree that for this to really work, some kind of control over where the user saves the .docs is going to be critical, because otherwise the app provides them with a new copy of the template doc, they "Save as" somewhere else, and the app is left pointing to the blank copy it provided them with.
Obviously, I can't think of a way to achieve "Save as" restriction/control just by launching a new instance of Word. The client has the idea of an embedded Word/Lotus instance in the app, loaded with the template doc when you choose one, but I have a few reservations about that:
I’ve dug around online and I’ve read that whichever version of Word I borrow MSWORD.OLB from will be the one the end user would require?
I've tried to follow the MSDN example of embedding a Word doc from here, but, as I've come to expect, the MSDN example doesn't even compile.
Even if I CAN figure out how to embed a .doc file into their application, I don’t know that I could control the use of “Save as…”
All of this STILL hasn’t touched on Lotus (!)
So… instinctively, I feel the embedded Word/Lotus thing has to be more work than it’s worth in the end.
So I’ve had a few other ideas brewing around.
One is looking into using Office XML (and a Lotus equivalent, if there is one), getting the user's "inputs" separately, and generating the document on the fly each time (a toy sketch of this follows the list below). I'm not particularly thrilled with that idea, but I think it COULD work, provided I just use old features to try and stay far backwards compatible.
Get the user's "inputs" separately and generate a document in HTML. Meh. It works, it's very cross-platform, and it's easily parsed and understood, but it's not good if you want to email it to someone (who emails an .html? It works, yes, but it's unconventional enough to throw off the average user), and it's even worse if you need to email it to someone for revisions…
Perhaps some kind of editable PDF? I know there are PDF libraries out there, and the more I stew on it, the more this sounds like the best option, though I've not done much work with PDFs and I don't know how easily embeddable they are / what options one has when creating them. I know they can be save-disabled; I've had that with my bloody state taxes before.
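To make the Office XML idea above concrete, here is a toy sketch using the python-docx library as one possible generator - the library choice and every name in it are mine, purely illustrative. It also sidesteps the "Save as" problem entirely, because the app picks the path:

# Toy example of generating an event document from the user's inputs
# (python-docx; all names and the output path are invented for illustration).
from docx import Document

def make_event_doc(employee, event_type, details, out_path):
    doc = Document()
    doc.add_heading(event_type + " - " + employee, level=1)
    doc.add_paragraph(details)
    # The app chooses out_path, so it always knows where the file lives.
    doc.save(out_path)

make_event_doc("J. Smith", "Commendation", "Exceeded Q3 goals.",
               r"C:\AppData\events\jsmith-comm-001.docx")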
I need some input here. Here are the TLDR questions:
Is launching a new instance of Word for each .doc as bad as I feel, given the user can "Save as" the document wherever and the application is then left pointing to a blank document?
Is trying to support embedded Word as big of a trouble as I feel like it is / more work than it’s worth / likely to cause problems with supporting multiple versions of Word? (Forward compatibility as well as currently released versions?)
What are thoughts on the PDF plan?
Any other good ideas?
Word does allow for programming some "Save" and "Save As" control via its object model. Any subroutines coded in VBA and placed into your Word template will be copied into all documents generated from that template. Additionally, most menu and Ribbon commands can be intercepted by creating a module containing subroutines named for the intercepted commands. So, for example, if a module contains a sub named FileSaveAs(), any code in that sub will be executed instead of the standard File|Save As command. Lastly, this code will replace Save As commands executed via keystroke, toolbar, menu, or Ribbon.
The code below will launch a Save As dialog pointed at a predetermined path whenever a "Save" or "Save As" command is executed:
Sub FileSave()
    ' Intercepts the built-in File|Save command.
    ControlSaveLocation
End Sub

Sub FileSaveAs()
    ' Intercepts the built-in File|Save As command.
    ControlSaveLocation
End Sub

Sub ControlSaveLocation()
    ' Shows the Save As dialog pre-pointed at the controlled directory,
    ' steering the user toward the location the application expects.
    Dim Directory As String
    Directory = "C:\Documents\"
    With Application.Dialogs(wdDialogFileSaveAs)
        .Name = Directory    ' pre-populate the dialog with the target path
        .Show
    End With
End Sub
Hope this helps.