Google Admin SDK: How to modify the feature instances of a Calendar Resource - google-admin-sdk

Source: https://developers.google.com/admin-sdk/directory/v1/reference/resources/calendars
When fetching a calendar resource, the structure of featureInstances is defined as a list of objects, but it is not marked as writable. Instead there is a writable property of the same name defined as a string, with the description "Instances of features for the calendar resource."
What is the format of this string?

You can look at the documentation for the Resources.features object, and you can insert a feature through that endpoint.
So what is the difference between the string and the array of objects?
Basically it depends on the number of features: if you only have one, it appears as a plain string, and if you have multiple features, the field is a list of objects containing the different features.
From the documentation:
featureInstances (string): Instances of features for the calendar resource.
featureInstances[].feature (nested object): The feature that this is an instance of. A calendar resource may have multiple instances of a feature.

The payload for the API should be an Array<{ feature: { name: string } }>:
"featureInstances": [
{
"feature": {
"name": "Projector"
}
},
{
"feature": {
"name": "Jamboard"
}
}
]
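For illustration, here is a rough sketch of sending that payload with the Node.js googleapis client. The customer value and calendarResourceId are placeholders, and credential setup (domain-wide delegation, Application Default Credentials) is assumed to already be in place.

// Minimal sketch: patch a calendar resource's featureInstances via the Directory API.
const { google } = require('googleapis');

async function setCalendarFeatures() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/admin.directory.resource.calendar'],
  });
  const directory = google.admin({ version: 'directory_v1', auth });

  // Send featureInstances as the array of { feature: { name } } objects described above.
  const res = await directory.resources.calendars.patch({
    customer: 'my_customer',
    calendarResourceId: '12345', // hypothetical resource ID
    requestBody: {
      featureInstances: [
        { feature: { name: 'Projector' } },
        { feature: { name: 'Jamboard' } },
      ],
    },
  });
  console.log(res.data.featureInstances);
}

setCalendarFeatures().catch(console.error);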

Related

Creating Batch Operations with AWS Amplify [GraphQL, DataStore, AppSync]

I've currently been handling batch operations with a for loop, but obviously, this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will take 1000+ putItems.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following those steps mentioned I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  company: String
  phone: String
  email: String
}

type Mutation {
  batchDelete(ids: [ID]): [Client]
}
I then create two new files. One request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
  $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Clients": $utils.toJson($clientsdata)
  }
}
and then as per the tutorial a "simple pass through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, now when I run the batchDelete mutation, nothing is returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the table name specified in the schema is actually different per environment. Your "Client" type wouldn't be recognized as the "Clients" you have stated in the request mapping template; the table gets the name Amplify assigns it on push, per environment.
E.g. Client-<some alphanumeric number>-envName
Add the full name of the table to your request and response mapping templates.
Also, your foreach statement should iterate over the argument that is actually passed in the context object. With the mutation defined above, that argument is ids, so it should read:
#foreach($item in ${ctx.args.ids})
Hope this helps.
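For reference, here is a rough sketch of how the batchDelete mutation might be invoked from the client once the resolver templates are fixed, using Amplify's API category. The mutation document below is an assumption derived from the schema.graphql above, and the selection set is illustrative.

import { API, graphqlOperation } from 'aws-amplify';

// GraphQL document for the custom mutation defined in schema.graphql above.
const batchDeleteMutation = /* GraphQL */ `
  mutation BatchDelete($ids: [ID]) {
    batchDelete(ids: $ids) {
      id
      name
    }
  }
`;

async function deleteClients(ids) {
  // Pass the array of client IDs as the "ids" argument the resolver iterates over.
  const result = await API.graphql(graphqlOperation(batchDeleteMutation, { ids }));
  return result.data.batchDelete;
}

// Example: delete the rows parsed from a CSV upload.
deleteClients(['id-1', 'id-2', 'id-3']).then(console.log).catch(console.error);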

AWS Kendra PreHook Lambdas for Data Enrichment

I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer / body that will contain key information for the searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed in a String data field within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it will index about 200 of the 1,000 articles and give an error that the remaining articles exceed the 2048 character limit in that field, hence why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use in the pre-extraction, as well as no additional filtering, so run this on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be needed so I have added that as well.
For my lambda, I create the following:
exports.handler = async (event) => {
  // Debug
  console.log(JSON.stringify(event))
  // Vars
  const s3Bucket = event.s3Bucket;
  const s3ObjectKey = event.s3ObjectKey;
  const meta = event.metadata;
  // Answer
  const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');
  // Remove HTML Tags
  const removeTags = (str) => {
    if ((str === null) || (str === ''))
      return false;
    else
      str = str.toString();
    return str.replace(/(<([^>]+)>)/ig, '');
  }
  // Truncate
  const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;
  let result = truncate(removeTags(answer.value.stringValue));
  // Response
  const response = {
    "version": "v0",
    "s3ObjectKey": s3ObjectKey,
    "metadataUpdates": [
      { "name": "sf_answer_preview", "value": { "stringValue": result } }
    ]
  }
  // Debug
  console.log(response)
  // Response
  return response
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce), and I strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
  "s3Bucket": "kendrasfdev",
  "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
  "metadata": {
    "attributes": [
      {
        "name": "_document_title",
        "value": {
          "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
        }
      },
      {
        "name": "sf_answer_preview",
        "value": {
          "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
        }
      },
      {
        "name": "_data_source_sync_job_execution_id",
        "value": {
          "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
        }
      },
    ]
  }
}
The Problem:
When this runs, I am still getting the same field limit error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that the response in the lambda for some reason isn't setting the field value to the new content correctly and still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra before that might know what I am doing wrong? This seems pretty common to be able to do things like strip PII information before it gets indexed, so I must be slightly off on my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich-text fields to the body of the document so that your Lambda can access them there. But if those fields are auto-generated for your documents from your data sources, that might not be easy.
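To illustrate that work-around, here is a rough sketch of a pre-extraction Lambda that cleans the staged document body instead of relying on the metadata field. It assumes the rich-text field has been mapped into the document body rather than a metadata attribute, and that the hook may rewrite the staged S3 object and return its key, per the hook reference linked in the question; everything beyond the event fields shown above is an assumption, not a confirmed contract.

// Sketch: strip/truncate the staged document body and hand the result back to Kendra.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const stripTags = (str) => (str || '').toString().replace(/(<([^>]+)>)/ig, '');
const truncate = (str, n = 2000) => (str.length > n ? `${str.substring(0, n)}...` : str);

exports.handler = async (event) => {
  const { s3Bucket, s3ObjectKey } = event;

  // Read the staged document body that Kendra placed in S3 for this hook.
  const staged = await s3.getObject({ Bucket: s3Bucket, Key: s3ObjectKey }).promise();
  const cleaned = truncate(stripTags(staged.Body.toString('utf-8')));

  // Write the cleaned body back so Kendra indexes the modified content.
  await s3.putObject({ Bucket: s3Bucket, Key: s3ObjectKey, Body: cleaned }).promise();

  return {
    version: 'v0',
    s3ObjectKey,         // point Kendra at the (re)written document
    metadataUpdates: []  // no metadata changes in this sketch
  };
};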

What are the extra values added to DynamoDB streams and how do I remove them?

I am using DynamoDB streams to sync data to Elasticsearch using Lambda
The format of the data (from https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.Tutorial.html) looks like:
"NewImage": {
"Timestamp": {
"S": "2016-11-18:12:09:36"
},
"Message": {
"S": "This is a bark from the Woofer social network"
},
"Username": {
"S": "John Doe"
}
},
So two questions.
What is the "S" that the stream attaches. I am assuming it is to indicate string or stream, but I can't find any documentation.
Is there an option to exclude this from the stream or do I have to write code in my lambda function to remove it?
What you are seeing is the DynamoDB Data Type Descriptors. This is how data is stored in DynamoDB (or at least how it is exposed via the low-level APIs). There are SDKs in various languages that will convert this to JSON.
For Python, see the TypeDeserializer in boto3.dynamodb.types: https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/dynamodb/types.html
import decimal
import json
import boto3.dynamodb.types

# Convert the typed NewImage into a plain Python dict.
deserializer = boto3.dynamodb.types.TypeDeserializer()
dic = {key: deserializer.deserialize(val) for key, val in record['dynamodb']['NewImage'].items()}

# DynamoDB numbers are returned as Decimal, which json.dumps cannot serialize by default.
def decimal_default(obj):
    if isinstance(obj, decimal.Decimal):
        return float(obj)
    raise TypeError

json.dumps(dic, default=decimal_default)
If you want to index into Elasticsearch, you have to do another json.loads() to convert the result back to a Python dictionary.
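If your Lambda is written in Node.js rather than Python, the JavaScript SDK has a comparable converter. A minimal sketch, assuming the aws-sdk v2 client is available in the Lambda runtime and the record shape matches the stream event above:

// Convert each stream record's DynamoDB-typed NewImage into a plain object.
const AWS = require('aws-sdk');

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (!record.dynamodb || !record.dynamodb.NewImage) continue;

    // unmarshall() strips the type descriptors ("S", "N", ...) and returns plain values.
    const doc = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);
    console.log(JSON.stringify(doc));
    // e.g. { Timestamp: '2016-11-18:12:09:36', Message: '...', Username: 'John Doe' }
    // ...index `doc` into Elasticsearch here...
  }
};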
The S indicates that the value of the attribute is simply a scalar string (S) attribute type. Each DynamoDB item attribute's key name is always a string though the attribute value doesn't have to be a scalar string. 'Naming Rules and Data Types' details each attribute data type. A string is a scalar type which is different than a document type or a set type.
There are different views of a stream record; however, there is no stream view that omits the item's attribute type descriptors while still providing the attribute values. Each possible StreamViewType is explained in 'Capturing Table Activity with DynamoDB Streams'.
Have fun!

Dynamodb restrict access to user data only

How do I restrict DynamoDB user access so that each user can only reach the data they own?
I came across "Using IAM Policy Conditions for Fine-Grained Access Control":
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
],
"dynamodb:Attributes": [
"UserId",
"GameTitle",
"Wins",
"Losses",
"TopScore",
"TopScoreDateTime"
]
}
}
The problem with the above condition is that my partition key is not the userId; it's something else.
Here is what my DB looks like:
Hash : "Sales" # its just plain text Sales
Range Date # its date
attributes : Sale : [ # array of maps
{
name : abc,
userId : idabc,
some-other : stuff
},
{
name : xyz,
userId : idxyz,
some-other : stuff
}
]
Any idea how to restrict access based on Sale[x].userId, or a better design for handling this kind of data?
I use the Date range key to query 90% of the data.
The other option is to use a different table for each logical table, like sales, expense, payroll, etc., but I don't want to create separate tables, and it defeats the purpose of NoSQL, I guess.
FYI, I am using the JavaScript SDK to access DynamoDB from the browser.
The app has 3 different user types:
customer (access to its own data)
merchant (its own data and access to its customers' data)
admin (access to all the data)
I think for this I have to create 3 different user pools; correct me if I am wrong.
But that way I can't restrict access to a user's own data, and if I use userId as the partition key, then querying for merchants becomes difficult.
Any suggestions on how to handle this DB design?
Thanks.

Choose language using views services in drupal 7

I use the Views Services module with REST services. The view shows content using "Current user's language", but when I fetch the content it is always returned in the default language.
For example:
http://example.com/api1_rest/views/content_view?id_display=page&limit=10&offset=0
Returns
[
  {
    "vid": "300",
    "uid": "4",
    "title": "node title",
    "log": "",
    "status": "1",
    "comment": "0",
    "promote": "0",
    "sticky": "0",
    "nid": "2488",
    "type": "news",
    "language": "en",
    "revision_timestamp": "1422900078",
    "revision_uid": "1",
    "body": {
      "en": [
        {
          "value": "content body here",
          "summary": "",
          "format": "4"
        }
      ]
    }
  }
]
I need to choose the language in the REST request.
From Services Views module page:
You can create exposed filters and pass them to your resource. For example, if we created an exposed filter "tags", the call would be:
http://example.com//?tags=7
So you can create an exposed filter for language in your view and then just filter the results by adding &lang=en to the URL:
http://example.com/api1_rest/views/content_view?id_display=page&limit=10&offset=0&lang=en