What is the difference between GetListItemChangesSinceToken and GetListItemChangesWithKnowledge?
Here is the awesome summary documentation, and about all that is said on the matter:
GetListItemChangesSinceToken: Returns changes made to the list since the date and time specified in the [change] token.
GetListItemChangesWithKnowledge: Returns all of the list items that meet specified criteria and that have changed since the date-time specified in the knowledge parameter for the specified list.
One takes a "change token" and the other takes "knowledge". However, I have not been able to find any documentation (or rationale) as to what advantage one has over the other, why they both exist, how they are fundamentally different, or which one is appropriate to use in protocol clients.
These SOAP services are formally defined in the [MS-LISTSWS]: Lists Web Service Protocol Specification protocol, but they seem identical, excepting the token they expect and emit. (Perhaps it is just the number of undocumented bugs?)
While GetListItemChangesWithKnowledge does have an additional syncScope parameter, [MS-LISTSWS] says:
[syncScope] MUST be null or empty ... [syncScope] is reserved and MUST be ignored
Any input -- especially first-hand knowledge -- is greatly appreciated.
You're probably right about the number of bugs being the difference...
Here is what I could find about both methods:
GetListItemChangesWithKnowledge (different MSDN documentation)
SharePoint 2010: Lists.GetListItemChangesWithKnowledge Method suggests that this method was introduced with SharePoint 2010 and SharePoint Workspace synchronization - I couldn't verify this though
The important bit is "returns all of the list items that meet specified criteria and that have changed since the date-time specified in the knowledge parameter for the specified list"
Diving further in: The knowledge element contains "Microsoft Sync Framework knowledge data structure" (MSDN), which for example is explained here (Microsoft Sync Framework, Part 2: Sync Metadata).
GetListItemChangesSinceToken (different MSDN documentation)
Should be used instead of GetListItemChanges according to MSDN (see link above). I'm assuming it should be used because the Change element further specifies the list item to get, as it says "If Nothing is passed, all items in the list are returned."
The changeToken actually contains something from the Change Log, which in turn has information about Adds, Deletes, Renames etc. --> This is useful if you have in-depth synchronization in your application
On Synchronizing with Windows SharePoint Services, Part 1 the synchronization is explained, including a bit of information on the changeToken.
Summary: It looks to me that the ...WithKnowledge method is a bit more complex as it is using the Microsoft Sync's Framework query syntax which includes a time constraint for changes. The ...SinceToken method only queries for all changes with specified action (e.g. Delete) without time constraint.
Ask yourself: do you really want to implement such complicated, poorly documented methods that are subject to change? I would suggest two things: analyze (e.g. via Fiddler) the traffic SharePoint Workspace 2010 is generating (also check Word/Outlook). What methods is it using? Could you implement something similar? Isn't GetListItemChanges enough for most applications?
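For a sense of what the client side of the ...SinceToken flow involves, here is a short Python sketch that extracts the change token from a response. The XML below is a hand-written simplification of the shape described in [MS-LISTSWS], not a captured server response, and the token value is made up:

```python
import xml.etree.ElementTree as ET

# Simplified illustration of a GetListItemChangesSinceToken response body
# (namespace per [MS-LISTSWS]; the token value is invented).
SAMPLE_RESPONSE = """
<listitems xmlns="http://schemas.microsoft.com/sharepoint/soap/">
  <Changes LastChangeToken="1;3;c34096e9-0000-0000-0000-000000000000;635000000000000000;12345">
    <Id ChangeType="Delete">42</Id>
  </Changes>
</listitems>
"""

NS = "{http://schemas.microsoft.com/sharepoint/soap/}"

def extract_change_token(xml_text):
    """Pull LastChangeToken out of the Changes element; a protocol client
    stores it and sends it back on the next call to receive only deltas."""
    root = ET.fromstring(xml_text)
    changes = root.find(NS + "Changes")
    return changes.get("LastChangeToken")

token = extract_change_token(SAMPLE_RESPONSE)
print(token)
```

The token is an opaque, semicolon-delimited structure; a client should store and replay it as-is rather than interpret its parts.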
Related
Are there any tried/true methods of managing your own sequential integer field w/o using SQL Server's built in Identity Specification? I'm thinking this has to have been done many times over and my google skills are just failing me tonight.
My first thought is to use a separate table to manage the IDs and use a trigger on the target table to manage setting the ID. Concurrency issues are obviously important, but insert performance is not critical in this case.
And here are some gotchas I know I need to look out for:
1. Need to make sure the same ID isn't doled out more than once when multiple processes run simultaneously.
2. Need to make sure any solution to 1) doesn't cause deadlocks.
3. Need to make sure the trigger works properly when multiple records are inserted in a single statement, not only for one record at a time.
4. Need to make sure the trigger only sets the ID when it is not already specified.
The reason for the last bullet point (and the whole reason I want to do this without an Identity Specification field in the first place) is because I want to seed multiple environments at different starting points and I want to be able to copy data between each of them so that the ID for a given record remains the same between environments (and I have to use integers; I cannot use GUIDs).
(Also yes, I could set identity insert on/off to copy data and still use a regular Identity Specification field but then it reseeds it after every insert. I could then use DBCC CHECKIDENT to reseed it back to where it was, but I feel the risk with this solution is too great. It only takes one time for someone to make a mistake and then when we realize it, it would be a real pain to repair the data... probably enough pain that it would have made more sense just to do what I'm doing now in the first place).
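The separate-ID-table idea above can be sketched with in-memory SQLite (table and column names are made up); in SQL Server the same read-increment-insert logic would live in a trigger or stored procedure, with locking hints such as UPDLOCK/HOLDLOCK to cover the concurrency gotchas:

```python
import sqlite3

# Sketch of a "separate key table" allocator. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE id_registry (table_name TEXT PRIMARY KEY, next_id INTEGER);
    INSERT INTO id_registry VALUES ('orders', 1000);  -- per-environment seed
    CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT);
""")

def insert_order(conn, payload, explicit_id=None):
    """Use the caller-supplied ID if given (copying between environments),
    otherwise allocate the next one inside a single transaction."""
    with conn:  # one transaction: increment, read, insert
        if explicit_id is None:
            conn.execute(
                "UPDATE id_registry SET next_id = next_id + 1 "
                "WHERE table_name = 'orders'")
            explicit_id = conn.execute(
                "SELECT next_id - 1 FROM id_registry "
                "WHERE table_name = 'orders'").fetchone()[0]
        conn.execute("INSERT INTO orders VALUES (?, ?)",
                     (explicit_id, payload))
    return explicit_id

print(insert_order(conn, "a"))      # allocated: 1000
print(insert_order(conn, "b", 5))   # explicit ID preserved: 5
print(insert_order(conn, "c"))      # next allocated: 1001
```

The explicit-ID path is what lets rows copied from another environment keep their original IDs, matching the last gotcha.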
SQL Server 2012 introduced the concept of a SEQUENCE database object - something like an "identity" column, but separate from a table.
You can create and use sequences from your code, use the values in various places, and more.
See these links for more information:
Sequence numbers (MS Docs)
CREATE SEQUENCE statement (MS Docs)
SQL Server SEQUENCE basics (Red Gate - Joe Celko)
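A minimal T-SQL sketch of the idea (object names are made up); note that an explicitly supplied ID bypasses the default, which is exactly the copy-between-environments behavior asked for:

```sql
-- A sequence seeded per environment, independent of any table
CREATE SEQUENCE dbo.OrderIds START WITH 1000 INCREMENT BY 1;

-- Use it as a column default; an explicitly supplied ID still wins
CREATE TABLE dbo.Orders (
    Id INT NOT NULL
        CONSTRAINT DF_Orders_Id DEFAULT (NEXT VALUE FOR dbo.OrderIds)
        PRIMARY KEY,
    Payload NVARCHAR(100)
);

INSERT INTO dbo.Orders (Payload) VALUES (N'allocated id');   -- gets 1000
INSERT INTO dbo.Orders (Id, Payload) VALUES (42, N'copied'); -- keeps 42
```

Unlike DBCC CHECKIDENT juggling, inserting an explicit Id does not disturb the sequence's current value.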
I am currently developing a Windows application that tests railroad equipment to find any defects.
Utility A => OK
Utility B => NOK
...
This application will check the given equipment and generate a report.
This report needs to be written once, and no further modifications are allowed since this file can be used as working proof for the equipment.
My first idea was to use PDF files (the Haru library looks great), but PDFs can also be modified.
I told myself that I could obfuscate the report and implement a homemade reader inside my application, but whatever way I store it, the file could always be accessed and modified, right?
So I'm running out of ideas.
Sorry if my approach and my problem appear naive, but it's an internship.
Thanks for any help.
Edit: I could also add checksums for the files after I generate them, keep a "checksums record file", and implement a checksum comparison tool for verification? I just thought of this.
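The checksum-record idea from the edit can be sketched in a few lines of Python (file names are made up). Note this only detects modification; anyone who can rewrite both a report and the record file can defeat it, which is why signing is still worth considering:

```python
import hashlib
import json
import os

def sha256_of(path):
    """Hash a file in chunks so large reports don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_checksum(path, record_file):
    """Append or update this file's checksum in the checksums record file."""
    record = {}
    if os.path.exists(record_file):
        with open(record_file) as f:
            record = json.load(f)
    record[os.path.basename(path)] = sha256_of(path)
    with open(record_file, "w") as f:
        json.dump(record, f, indent=2)

def verify(path, record_file):
    """True if the file still matches the checksum recorded for it."""
    with open(record_file) as f:
        record = json.load(f)
    return record.get(os.path.basename(path)) == sha256_of(path)
```

Keeping the record file on a separate, write-protected system raises the bar somewhat, but a digital signature (as the answers discuss) is the stronger guarantee.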
I believe the answer to your question is to use any format whatsoever and add a digital signature that anybody can verify. E.g., create a GnuPG key, get that key signed by the people who need to check your documents, upload it to one of the key servers, and use it to sign the documents. You can publish the documents and provide a link to your public key for verification; for critical cases, anyone verifying must trust your signature (i.e., trust somebody who signed your key).
People's lives depend on the state of train inspections. Therefore, I find it hard to believe that someone expects you to solve this problem only using free-as-in-beer components.
Adobe supports a strong digital signature model. If you buy into their technology base, you can create PDFs that are digitally signed and therefore tamper-evident, as the consumer can check the signature.
You can, as someone else pointed out, use GNUpg, or for that matter OpenSSL, to implement your own signature scheme, but railroad regulators are somewhat less likely to figure out how to work with it.
I would store reports in an encrypted/protected datastore.
When a user accesses a report (requests a copy; the original is of course always in the database and cannot be modified), it includes the text "Report #XXXXX". If you want to validate the report, retrieve a new copy from the system using the Report ID.
I've been working with Endeca at arm's length for three years. Now I need to write my first dynamic business rule.
I have records with a property, say "ActiveField", as a business Rule I need to take the value of "ActiveField" and return the records that match it. I'll restrict it to 20 with the Style.
I've read about writing Dynamic Business Rules, and I've gone through the dialogue box. I can't find where I'd need to write the logic that makes the matches. If it was SQL I expect I'd type in:
SELECT record.name, record.id WHERE record.ActiveField = #ActiveField
I appreciate Endeca might not work like this, or convey this functionality in drop-down boxes which are written to XML config files.
But I can't find any hint of this level of complexity in the documentation; I'm probably missing something since this is fundamental.
Business rules are triggered by search / navigation states, not by records.
Rules can be created in several places depending on your deployment:
1. Developer Studio
2. Merchandising Workbench (Page Builder or Rule Manager)
3. Experience Manager (which has replaced Merchandising Workbench in the most recent releases)
In any of these locations you can set a trigger for your rule which can be either a search term or a dimension value(s), or combination of the two.
The actual records returned do not affect whether or not the rule is triggered. At that point your application has to take over and do something with the rule.
Best of luck.
A "checkResult" service deployed on a node machine is defined to return the result on the node to the cluster controller that sends the request. The result on the node, which is in the form of a file, may vary drastically in length, as is often the case with daily log files.
At first, I thought it might be OK just to use a single string to pack the whole content of the file, so I defined
checkResult(inType *in, OutType *out)
where OutType * is char *. Then I realized that the string could be kilobytes long or even more, so I wonder whether it is proper to use a string here.
I googled a lot and could not find the maximum length permitted in WSDL (it may also conflict with the local max buffer length), and did not find any information about transferring a file-type parameter either.
Using a struct type may be suggested, but it could be deeply nested for a file and difficult to parse when some of the elements inside could be nil or absent.
What would you do when you need to return a file-type result or a large amount of data from a web service?
P.S. The server and client are both in C.
When transferring a large amount of data in a (SOAP) web service request or response, it is generally better practice to use an attachment mechanism rather than including the data in the body. Roughly in order of broadest to narrowest adoption:
Message Transmission Optimization Mechanism (MTOM) - The newest of these specifications (http://www.w3.org/TR/soap12-mtom/) which is supported in many of the mainstream languages.
SOAP with Attachments - This specification (http://www.w3.org/TR/SOAP-attachments) has been around for many years and is supported in several languages but notably not by Microsoft.
Direct Internet Message Encapsulation (DIME) - This specification (http://bgp.potaroo.net/ietf/all-ids/draft-nielsen-dime-02.txt) was pushed by Microsoft and support has been provided in multiple languages/frameworks including java and .NET.
Ideally, you would be able to work with a framework to give you code stub generation directly from a WSDL indicating MTOM-based web service.
The critical parts of such a WSDL document include:
MTOM policy declaration
Policy application in the binding
Placeholder for the reference to the attachment in the types (schema) section
If you are working contract-first and have a WSDL in hand, the example in section 1.2 of this site (http://www.w3.org/Submission/WS-MTOMPolicy/) shows the simple additions to be made to declare and apply the MTOM policy. Appendix I of the same site shows an example of a schema element which allows a web service client or server to identify a reference to the MTOM attachment.
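To illustrate what the placeholder reference looks like on the wire, here is a small Python sketch that builds and parses an xop:Include element per the XOP specification (the element name and Content-ID value are made up); in a real MTOM message the binary data lives in a separate MIME part identified by that Content-ID:

```python
import xml.etree.ElementTree as ET

XOP_NS = "http://www.w3.org/2004/08/xop/include"

# The element that would otherwise carry base64 data instead holds an
# xop:Include pointing at a MIME part by Content-ID.
body = ET.Element("reportFile")
include = ET.SubElement(body, "{%s}Include" % XOP_NS)
include.set("href", "cid:report-part-1@example.org")

xml_text = ET.tostring(body, encoding="unicode")
print(xml_text)

# A receiver resolves the reference back to the MIME part:
parsed = ET.fromstring(xml_text)
href = parsed.find("{%s}Include" % XOP_NS).get("href")
content_id = href[len("cid:"):]
```

The point of MTOM/XOP is that the XML body stays small and well-formed while the bulk bytes travel outside it, avoiding the ~33% base64 inflation of inline encoding.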
I have not implemented a web service or client in C, but a brief scan of recently-updated packages revealed gSoap (http://www.cs.fsu.edu/~engelen/soap.html) as a possibility for helping in your endeavors.
Give those documents a look and see if they help to advance your project.
I'm trying to think of the correct design for a web service. Essentially, this service is going to perform a client search in a number of disparate systems, and return the results.
Now, a client can have various pieces of information attached - e.g. various pieces of contact information, their address(es), personal information. Some of this information may be complex to retrieve from some systems, so if the consumer isn't going to use it, I'd like them to have some way of indicating that to the web service.
One obvious approach would be to have different methods for different combinations of wanted detail - but as the combinations grow, so too do the number of methods. Another approach I've looked at is to add two string array parameters to the method call, where one array is a list of required items (e.g. I require contact information), and the other is optional items (e.g. if you're going to pull in their names anyway, you might as well return that to me).
A third approach would be to add additional methods to retrieve the detail. But that's going to explode the number of round trips if I need all the details for potentially hundreds of clients who make up the result.
To be honest, I'm not sure I like any of the above approaches. So how would you design such a generic client search service?
(Considered CW since there might not be a single "right" answer, but I'll wait and see what sort of answers arrive)
Create a "criteria" object and use that as a parameter. Such an object should have a bunch of properties to indicate the information you want. For example "IncludeAddresses" or "IncludeFullContactInformation".
The consumer is then responsible for setting the right properties to true, and all combinations are possible. This will also make the code in the service easier to write. You can simply write if(criteria.IncludeAddresses){response.Addresses = GetAddresses();}
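A sketch of that pattern in Python (all names are hypothetical; the point is that one search method consults the flags before touching the expensive systems):

```python
from dataclasses import dataclass

@dataclass
class ClientSearchCriteria:
    name_pattern: str
    include_addresses: bool = False
    include_contact_info: bool = False

def search_clients(criteria, backend):
    results = []
    for client in backend.find_by_name(criteria.name_pattern):
        record = {"id": client["id"], "name": client["name"]}
        # Only query the expensive systems when the consumer asked
        if criteria.include_addresses:
            record["addresses"] = backend.addresses_for(client["id"])
        if criteria.include_contact_info:
            record["contacts"] = backend.contacts_for(client["id"])
        results.append(record)
    return results

# Stand-in for the real disparate systems, just to exercise the shape
class FakeBackend:
    def find_by_name(self, pattern):
        return [{"id": 1, "name": "Ada"}]
    def addresses_for(self, client_id):
        return ["1 Main St"]
    def contacts_for(self, client_id):
        return ["ada@example.org"]

hits = search_clients(
    ClientSearchCriteria("Ada", include_addresses=True), FakeBackend())
print(hits)
```

Adding a new optional detail later means adding one flag and one branch, not a new method signature.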
Non-structured or semi-structured data is best handled as XML. You might pass the XML data as a string, or wrap it in a class that adds some functionality to it. Use XPathNavigator to traverse the XML. You can also use the XmlDocument class, although it is not very friendly to use. Either way, you will need some kind of class to handle the XML content.
That's why XML was invented: to handle data whose structure is not clearly defined.
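As a rough illustration of why XPath-style navigation suits this kind of data, here is a Python sketch using the standard library's ElementTree (the sample document is made up); optional elements simply fall out of the queries instead of needing per-combination handling:

```python
import xml.etree.ElementTree as ET

# Made-up sample of semi-structured data: clients may or may not
# have each piece of contact information.
doc = ET.fromstring("""
<clients>
  <client id="1">
    <name>Ada</name>
    <email>ada@example.org</email>
  </client>
  <client id="2">
    <name>Grace</name>
  </client>
</clients>
""")

# XPath-style queries cope with the optional elements naturally
emails = [e.text for e in doc.findall("./client/email")]
names_without_email = [
    c.findtext("name") for c in doc.findall("client")
    if c.find("email") is None
]
print(emails, names_without_email)
```

The same queries keep working as the document's shape varies from record to record, which is the property the answer is pointing at.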