SharePoint UserData and the ;# Syntax in returned data - web-services

Can a SharePoint expert explain to me the ;# in data returned by the GetListItems() call to the Lists web service?
I think I understand what they are doing here. The ;# acts almost like a comment syntax... or, better put, a way of including the actual data (the string) and not just the ID. That way you can use either, and they are neatly paired together in the same column.
Am I way off base? I just can't figure out the slightly different uses I'm seeing. For example,
I have a list with:
ows_Author
658;#Tyndall, Bruno
*in this case the 658 seems to be an ID for me in a users table somewhere*
ows_CreatedDate (note: a custom field, not ows_Created)
571;#2009-08-31 23:41:58
*in this case the 571 seems to be an ID of the row I'm already in. Why the repetition?*
Can anyone out there shed some light on this aspect of SharePoint?

The string ;# is used as a delimiter by SharePoint's lookup fields, including user fields. When working with the object model, you can use SPFieldLookupValue and SPFieldUserValue to convert the delimited string into a strongly-typed object. When working with the web services, however, I believe you'll need to parse the string yourself.
You are correct that the first part is an integer ID: the ID in the site's user information list for a user field, or the ID of the corresponding item in the lookup list for a lookup field. The second part is the user's display name or the value of the lookup column.
Nicolas correctly notes that this delimiter is also used for other composite field values, including...
SPFieldLookupValueCollection
SPFieldMultiColumnValue
SPFieldMultiChoiceValue
SPFieldUserValueCollection
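If you are calling GetListItems() from a script and don't have the object model available, splitting the raw value on the delimiter yourself is usually enough. A minimal PowerShell sketch, using the sample value from the question:
$raw = "658;#Tyndall, Bruno"
$parts = $raw -split ';#', 2       # limit to 2 pieces; multi-value fields repeat the delimiter between entries
$lookupId    = [int]$parts[0]      # 658
$lookupValue = $parts[1]           # "Tyndall, Bruno"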

SPFieldUser inherits from SPFieldLookup, which uses the ;# notation. You can easily parse the value by creating a new instance of the SPFieldLookupValue class:
string rawValue = "1;#value";
SPFieldLookupValue lookupValue = new SPFieldLookupValue(rawValue);
string value = lookupValue.LookupValue; // returns value

Related

How do I Loop through a multi-value People Field or Lookup field in SharePoint 2013 designer using REST

I have a multi-valued people picker and a multi-valued lookup field, and I need to read all of their entries in a 2013 workflow. I know how to create a workflow that retrieves the data and iterates through each list item using REST and a dictionary. Given that I'm iterating through each item, I now need to iterate through each multi-valued field as well.
In the past, I have done this using a loop iterator and a second dictionary entry representing the data in the multi-valued field, but I don't have access to that code anymore. I could use a loop with the find function to parse through my responseContent, but that is not reliable since my responseContent will have multiple records in it, and I know it can be done with a second dictionary entry.
My REST query is:
_api/lists/GetByTitle('EmailSetup')/Items?$select=EmailYN,EmailSubject,EmailBody,EmailTo/EMail,Emailcc/EMail,EmailToWorkflowPerson/Title,EmailccWorkflowPersons/Title&$filter=(Title%20eq%20%27BSM%20Review%27)%20and%20(WorkflowName%20eq%20%27ProcessBSMRequests%27)&$expand=EmailTo,Emailcc,EmailToWorkflowPerson,EmailccWorkflowPersons
Where my multi-valued fields are the Emailcc and EmailccWorkflowPerson, (people picker and lookup respectively).
My first dictionary has the following data structure, which captures the requestHeaders:
Accept String application/json;odata=verbose
Content-Type String application/json;odata=verbose
In my first loop I get all my attributes, but I'm not certain how to get the multi-valued fields Emailcc and EmailccWorkflowPersons.
Yes, I could parse through my response, but there must be a better way to put these multi-valued fields into a structure and then loop through them.
What I need to know is: what is that structure (dictionary), how do you get the data into it, and how do you loop through it?
The final result should be something of this sort (pseudocode), where Index is which record I am on and Index2 is which multi-value entry I am on:
d/results([%Variable: Index%])/Emailcc/EMail([%Variable: Index2%])xxx
With a lot of debugging I have gotten half my answer, and maybe someone can help me with the other half. The data structure that comes back via REST looks like this (with some of our own data masked):
responseContent={"d":{"results":[{
"__metadata":{"id":"Web\/Lists(guid'c7bb71c8-a9dd-495f-aa5f-4dcacdf8db5c')\/Items(1)","uri":"https:\/\/xxxxx.xxxxxx.xxxxxxxxx.xxx\/hc\/teams\/MES\/_api\/Web\/Lists(guid'c7bb71c8-a9dd-495f-aa5f-4dcacdf8db5c')\/Items(1)","etag":"\"13\"","type":"SP.Data.EmailSetupListItem"},
"EmailTo":{"__metadata":{"id":"b493bee4-ec1a-4b76-a028-11766bdb7e5b","type":"SP.Data.UserInfoItem"},"EMail":"xyxee.dyff#homeward.com"},
"EmailToWorkflowPerson":{"__deferred":{"uri":"https:\/ \/xxxxx.xxxxxx.xxxxxxxxx.xxx\/hc\/teams\/MES\/_api\/Web\/Lists(guid'c7bb71c8-a9dd-495f-aa5f-4dcacdf8db5c')\/Items(1)\/EmailToWorkflowPerson"}},
"Emailcc":{"results":[{"__metadata":{"id":"790a690a-515b-4d07-bba3-73bf325fbbed","type":"SP.Data.UserInfoIt em"},"EMail":"xyxee.dyffns1#homeward.com"},{"__metadata":{"id":"3d77e75c-5fa8-4df6-937c-97e572714843","type":"SP.Data.UserInfoItem"},"EMail":"xyxee.dyffr#homeward.com"}]},
"EmailccWorkflowPersons":{"results":[{"__metadata":{"id":"06582ed9-09 10-4932-9b43-0cfb072942c7","type":"SP.Data.WorkflowPersonsListItem"},"Title":"Assistant Administrator"},{"__metadata":{"id":"13d03566-1703-4550-a21f-08ea286d4940","type":"SP.Data.WorkflowPersonsListItem"},"Title":"Initiator"}]},
"EmailYN":"No",
"EmailSubject":"BSM Request # %%ID%%",
"EmailBody":"<div class=\"ExternalClass645790473F7D4B62BE6224DD7B93990F\">%%IDLINK%%<br><\/div><div class=\"ExternalClass645790473F7D4B62BE6224DD7B93990F\">and the BSM# %%ID%%<br><\/div><div class=\"ExternalClass64 5790473F7D4B62BE6224DD7B93990F\"><br><\/div>"
}]}}
I created another dictionary variable, EmailccResults, just like the first one, to store the multi-valued Emailcc addresses.
Then the following Get:
Get d/results([%Variable: Index%])/Emailcc/results from Variable: responseContent (Output to Variable: EmailccResults)
To get the record count, I use Count Items on EmailccResults.
I set my second index to start at zero and loop based on the count in EmailccResults.
To set my intermediate email address (getting one value at a time from the multi-value people picker), I use:
Get d/results([%Variable: Index%])/Emailcc/results([%Variable: Index2%])/EMail from Variable: responseContent (Output to Variable: EmailCc)
Then I increment the Index2 variable and go to the next record. This works perfectly.
Now my problem is the multi-valued lookup that is included in this query (see the results above). I attempt the same logic and I am successfully getting the count, but not the Title fields.
My Get is:
Get d/results([%Variable: Index%])/EmailccWorkflowPersons/results from Variable: responseContent (Output to Variable: EmailccResults)
My actual assignment is:
Get d/results([%Variable: Index%])/EmailccWorkflowPersons/results([%Variable: Index2%])/Title from Variable: responseContent (Output to Variable: tmpvar)
** Update: the lookup works exactly the same way. My problem was that my Get above had some blank lines in the text box.
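For reference, the same nested paths can be walked outside of Designer to verify the structure. A rough PowerShell sketch, where the site URL is a placeholder and on-premises Windows authentication is assumed:
$uri = "https://server/hc/teams/MES/_api/lists/GetByTitle('EmailSetup')/Items?" +
       "`$select=EmailSubject,Emailcc/EMail,EmailccWorkflowPersons/Title&" +
       "`$expand=Emailcc,EmailccWorkflowPersons"
$response = Invoke-RestMethod -Uri $uri -UseDefaultCredentials -Headers @{ Accept = "application/json;odata=verbose" }
foreach ($item in $response.d.results) {                           # outer loop: Index
    foreach ($cc in $item.Emailcc.results) {                       # inner loop over the people picker: Index2
        $cc.EMail
    }
    foreach ($person in $item.EmailccWorkflowPersons.results) {    # inner loop over the lookup: Index2
        $person.Title
    }
}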

Is it possible to query multiple AWS Cloudsearch fields for the same value without repeating?

Using AWS Cloudsearch, I need to query two separate fields for the same value using a structured (compound) query, e.g.:
(and (or name:'john smith') (or curr_addr:'123 someplace' other_addr:'123 someplace'))
This query works, but I'm wondering if it's necessary to repeat the value for each field that I want to search against. Is there some way to specify the value only once, e.g. curr_addr+other_addr:'123 someplace'?
That is the correct way to structure your compound query. From the AWS documentation, you'll see that they structure their example query the same way:
(and title:'star' (or actors:'Harrison Ford' actors:'William Shatner')(not actors:'Zachary Quinto'))
From Constructing Compound Queries
You may be able to get around this by listing the more repetitive fields in the query options (q.options) and then specifying the field explicitly only for the remaining fields. The fields list acts as a fallback for when you don't specify which field you are searching in a compound query. So if you list the address fields there and only specify the name field in your query, you may get close to the behavior you're looking for.
Query options
q.options={fields: ['curr_addr','other_addr']}
Query
(and (or name:'john smith') (or '123 someplace'))
But this approach would only work for one set of repetitive fields, so it's not a silver bullet by any means.
From Search API Reference (see q.options => fields)
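For completeness, here is a rough sketch of sending that combination (a structured query plus q.options) to a domain's search endpoint from PowerShell; the endpoint URL is a placeholder:
$endpoint = "https://search-mydomain-xxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com"   # placeholder endpoint
$query    = "(and (or name:'john smith') (or '123 someplace'))"
$options  = '{"fields":["curr_addr","other_addr"]}'
$uri = "$endpoint/2013-01-01/search?q=" + [uri]::EscapeDataString($query) +
       "&q.parser=structured&q.options=" + [uri]::EscapeDataString($options)
(Invoke-RestMethod -Uri $uri).hits.hit        # each hit carries the matching document's id and fields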

Updating ElasticSearch mappings field type with existing data

I'm storing a few fields, and for the sake of simplicity let's call the field in question 'age'. Initially ES created the index for me, and it ended up choosing the wrong field type for 'age': it's a string type right now instead of a numeric type. I'm aware that I should have defined the mappings myself to begin with and forced the values being sent to be consistently all strings or all numeric.
What I have right now is an index with a ton of data that uses a 'string' type for age, with values such as: 1, 10, 'na', etc.
Now my question is: if I were to change the mapping from string to integer, would indexing have any issues with existing values such as 'na' when documents are updated?
I just wanted to ask first before I start creating a playground environment to test with a sample data set.
What you can update, according to the docs:
new properties can be added to Object datatype fields.
new multi-fields can be added to existing fields.
doc_values can be disabled, but not enabled.
the ignore_above parameter can be updated.
Otherwise, I'm afraid you will have to create a new index with the correct mapping and reindex your data; see this post for an example.
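A minimal sketch of that route using the create index and reindex APIs (index names are placeholders; Elasticsearch 7+ without authentication is assumed, and ignore_malformed is added so stray values like 'na' are dropped from the field rather than rejecting the whole document):
$es = "http://localhost:9200"        # placeholder cluster URL
$mapping = '{ "mappings": { "properties": { "age": { "type": "integer", "ignore_malformed": true } } } }'
Invoke-RestMethod -Method Put -Uri "$es/myindex_v2" -ContentType "application/json" -Body $mapping
$reindex = '{ "source": { "index": "myindex" }, "dest": { "index": "myindex_v2" } }'
Invoke-RestMethod -Method Post -Uri "$es/_reindex" -ContentType "application/json" -Body $reindex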

Reading Sharepoint List Calculated field from PowerShell

I am trying to pull data from a SharePoint list. The field is a calculated column that takes a Yes/No answer and converts it to the words 'Archived' or 'Non-archived'.
I can see the data formatted correctly in the calculated column in IE, but when I try to pull the data, the variable comes back empty.
$site = Get-SPSite https://extranet/sites/site
$web = Get-SPWeb -Identity https://extranet/sites/site
$list = $web.GetList("https://extranet/sites/site/lists/List")
$view = $list.Views["LISTVIEW"]
$listitems = $list.GetItems($view)
foreach ($listitem in $listitems) {
I have also tried this, but I get an error about indexing into a null variable.
$mailboxdb = $listitem.Fields["mailboxdb"] -as [Microsoft.SharePoint.SPFieldCalculated];
$mailboxdb.GetFieldValueAsText($listitem["mailboxdb"]);
I also see this in the $listitems output: ows_MailboxDb='string;#Archived'
But when I check $mailboxdb, it's empty.
I found this, but I don't know what it means by 'stored result' fields.
In PowerShell, although you can reference any field in the list in your script, you can only retrieve values from "static" fields - that is, you cannot use calculated fields. PowerShell will not complain, but you will not get results in your script. This is because the .NET library for SharePoint will not do the field calculation for you - that only happens inside the SharePoint UI itself.
If you need access to a "calculated" field, you actually need two fields - the calculated field (usually hidden) and a "stored result" field, which must be updated from the calculated value in the last step of the "Update" workflow. Then you can use the "stored result" field in PowerShell - and also, incidentally, in view calculations in SharePoint.
You basically have two options here. You can have PowerShell do the calculation for you, which is probably the simpler of the two options given the basic nature of your calculation.
The second option, as mentioned at the end of your post, is to create a new field which can store the result of the calculation; in your case, you could call it Status. Then you would create a workflow that runs whenever a list item is created or updated and stores the result of the calculated field in that new field. This seems redundant to me if the only reason for the field is to use the value of the calculated column in a PowerShell script.
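A minimal sketch of the first option, reusing the $listitems loaded above and assuming the underlying Yes/No column has the internal name "Archived" (an assumed name, since the question doesn't give it):
foreach ($listitem in $listitems) {
    # Derive the value in PowerShell instead of reading the calculated column
    $mailboxdb = if ($listitem["Archived"]) { "Archived" } else { "Non-archived" }
    Write-Output ("{0}: {1}" -f $listitem.Title, $mailboxdb)
}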

Dynamics AX 2012: Translate RecId into a Value

I asked a question similar to this previously (How to use RecId as a foreign key in a form) but would like to explore it a bit further in a more complex scenario.
Replacement keys work great when you have an index set up with Allow Duplicates set to No, but they don't seem to work at all with multiple-field indexes or when Allow Duplicates is set to Yes.
Is there a way, programmatically, to replace a foreign key in a grid with a translated value without using replacement keys? I tried writing a display method to override the field, but some odd behavior resulted: fields moving around in the grid, and the display method being unaware of which row to use, so all the values in the entire column were the same.
Table A: Bob:1, Sally:1, Sue:3
Table B: 1:Apples, 2:Apples, 3:Oranges
The "people" are tied to their favorite "foods" by the food RecId, refererenced in the People table. Assume there is additional data in other columns that make these records unique, so consolidating "1:Apples" and "2:Apples" is not possible.
It seems there should be a way to write a display method to overwrite a field value in a grid. Any suggestions? Sample code?
Thanks
Firstly, surrogate FK replacement does (or at least should) work with composite keys (e.g., {First Name, Last Name}).
Secondly, you state that there is "additional data in other columns" that makes these records unique... then why aren't those columns being combined with the food's name to form an alternate key? The data model seems incorrect (or at least some metadata isn't being made consistent with the conditions you've stated).
Thirdly, any Field Group can be chosen as the ReplacementFieldGroup on a Reference Group control. That alone will allow you to do basically whatever you want. That said, I would strongly encourage you to use an alternate key as your replacement field group whenever possible due to the semantics of surrogate FK replacement.
Flow:
1) The user types a value (or values) into the reference group.
2) The user tabs out.
3) The typed value(s) are used to look up a record in the related table.
4a) If the typed value(s) map uniquely to a record, that record is chosen; otherwise,
4b) If the typed values are not unique, a lookup is presented to allow the user to pick which record they "meant". Note that the lookup must therefore present a collection of uniquely identifiable records so that the user knows which record to pick (if the records all look the same in the lookup, they'll have no idea which one to pick).
5) Upon successful resolution of the typed values, the record is set back on the source form.
Given this flow, it is obvious that steps 3-5 will be broken if there is no unique index (key) on the table. (How is the user supposed to specify a unique reference to the record if the record has no means of being uniquely identified, assuming you don't want to display RecId to the user?)
In the exceptional case that you decide you still want to use a non-unique index as your replacement field group, you must implement resolveReference and lookupReference to provide the user a unique resolution/lookup experience (to handle steps 3-5 in the above flow). Note: the common use case for this is wanting to effectively eliminate non-selective fields from being displayed in the Reference Group and instead letting some outer context or mode implicitly set that value. For example, if the alternate key was {Size, Color}, one could potentially make "Color" a global form context (perhaps by having the user pick a color at the top of the form) and only have the user enter Size into the Reference Group. The Color could then be implicitly added back via the resolveReference and lookupReference overrides.