How to map a specific record type from source to a specific record type in target - Informatica

I am trying to find a way to migrate Account data from Salesforce A to Salesforce B with specific conditions in place.
The source condition is RecordType.Name = 'Customer Account' OR RecordType.Name = 'Partner Account', and these should map to the target's RecordType.Name = 'Client' and RecordType.Name = 'Partner Data' respectively.
So when the source record type is Customer Account, the record should be inserted with the target's RecordType.Name = 'Client'. Is there a way to filter data on the target?
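Informatica aside, it may help to see the record-type translation spelled out. Below is a rough Python sketch (using the simple_salesforce library, with placeholder credentials and the record type names from the question) that looks up the target org's RecordTypeId by name and inserts each account with the mapped record type. In an Informatica mapping the equivalent is typically an Expression/Router on the source record type name plus a Lookup on the target RecordType object.

# Rough sketch, not an Informatica mapping: translate source record types
# to target record types before inserting. Credentials are placeholders.
from simple_salesforce import Salesforce

source = Salesforce(username='user@orgA', password='***', security_token='***')
target = Salesforce(username='user@orgB', password='***', security_token='***')

# Source record type name -> target record type name, per the question
RECORD_TYPE_MAP = {'Customer Account': 'Client', 'Partner Account': 'Partner Data'}

# Resolve the target org's RecordTypeIds once
target_rt_ids = {
    r['Name']: r['Id']
    for r in target.query(
        "SELECT Id, Name FROM RecordType "
        "WHERE SObjectType = 'Account' AND Name IN ('Client', 'Partner Data')"
    )['records']
}

# Pull matching source accounts and insert them with the mapped record type
src = source.query(
    "SELECT Name, RecordType.Name FROM Account "
    "WHERE RecordType.Name IN ('Customer Account', 'Partner Account')"
)
for acc in src['records']:
    rt_name = RECORD_TYPE_MAP[acc['RecordType']['Name']]
    target.Account.create({'Name': acc['Name'], 'RecordTypeId': target_rt_ids[rt_name]})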

Related

AWS AppFlow <-> Salesforce integration

I'm trying to set up a workflow to back up the Account and Contact objects from Salesforce to S3 via AWS AppFlow. I'm able to set up the connection and back up the files on demand.
However, for restoration I would like to import the field mapping from a .csv file; below are the first 3 sample lines (comma-separated source and destination fields).
Name, Name
Type, Account Type
AccountNumber, Account Number
But AppFlow is unable to import it, failing with "Couldn't parse rows from the file". Am I missing something?
This was a bug on the AWS side and it has been taken up with them. The workaround is to do the mapping manually instead of via an external CSV; make sure the source field attributes match the corresponding fields on the Salesforce objects.

How do I get inbound caller ID info from AWS Connect and Lex in my Lambda function

I have Connect and Lex up and running and can customize my Lex chatbot via a Lambda function that checks the incoming phone number.
I would like to use the caller ID info to look up customer info for the Lambda to use.
How do I get the inbound caller ID info?
Thanks
Use Contact Attributes
Contact attributes let you store customer input or data about a customer, and then use it later in a contact flow.
Contact attributes let you pass data between Amazon Connect and other services, such as Amazon Lex and AWS Lambda. Contact attributes can be both set and consumed by each service. For example, you could use a Lambda function to look up customer information, such as their name or order number, and use contact attributes to store the values returned to Amazon Connect. You could then reference those attributes to include the customer's name in messages using text to speech, or store their order number so they do not have to enter it again.
How to set Contact Attributes
To set a contact attribute with a Set contact attributes block
In Amazon Connect, choose Routing, Contact flows.
Select an existing contact flow, or create a new one.
Add a Set contact attributes block.
Edit the Set contact attributes block, and choose Use text.
For the Destination key, provide a name for the attribute, such as Company. This is the value you use for the Attribute field when using or referencing attributes in other blocks. For the Value, use your company name.
You can also choose to use an existing attribute as the basis for creating the new attribute.
What customer data can you get from System Attributes?
Customer number
Dialed number
Customer callback number
Stored customer input
...and more
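If you invoke a Lambda function directly from the contact flow (an Invoke AWS Lambda function block) rather than going through Lex, the caller's number is already part of the invocation event. A minimal sketch, assuming the standard Connect-to-Lambda event shape:

# Minimal sketch for a Lambda invoked directly from a Connect contact flow
# (not via Lex); assumes the standard Connect invocation event shape.
def lambda_handler(event, context):
    contact_data = event['Details']['ContactData']

    # Inbound caller ID in E.164 format, e.g. "+14255550100"
    caller_number = contact_data['CustomerEndpoint']['Address']

    # Attributes already set by a "Set contact attributes" block, if any
    attributes = contact_data.get('Attributes', {})

    # Look up the customer here; flat key/value pairs returned by the function
    # can be referenced in later contact flow blocks as external attributes.
    return {'CallerNumber': caller_number, 'Company': attributes.get('Company', '')}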
For anyone in the future having difficulty.
In Amazon Connect, you can pass contact attributes (Inbound Caller ID) from inside the customer input block where the Lex Bot exists to the Lambda that the bot calls.
Open up the Get customer input block where your Lex Bot gets the user's input and add in a session attribute.
Set the Destination Key to any name (I set mine to InboundCallerID).
Set Type to System.
Set Attribute to Customer Number.
Now you can access the customer number via the event variable from inside your lambda.
Example:
def lambda_handler(event, context):
    # Session attributes set in the Get customer input block arrive here (Lex V2 event shape)
    phone_number = event['sessionState']['sessionAttributes']['InboundCallerID']

Use AWS Athena To Query S3 Object Tagging

Is it possible to use AWS Athena to query S3 Object Tagging? For example, if I have an S3 layout such as this
bucketName/typeFoo/object1.txt
bucketName/typeFoo/object2.txt
bucketName/typeFoo/object3.txt
bucketName/typeBar/object1.txt
bucketName/typeBar/object2.txt
bucketName/typeBar/object3.txt
And each object has an S3 Object Tag such as this
#For typeFoo/object1.txt and typeBar/object1.txt
id=A
#For typeFoo/object2.txt and typeBar/object2.txt
id=B
#For typeFoo/object3.txt and typeBar/object3.txt
id=C
Then is it possible to run an AWS Athena query to get any object with the associated tag such as this
select * from myAthenaTable where tag.id = 'A'
# returns typeFoo/object1.txt and typeBar/object1.txt
This is just an example and doesn't reflect my actual S3 bucket/object-prefix layout. Feel free to use any layout you wish in your answers/comments.
Ultimately I have a plethora of objects that could live in different buckets and folder paths, but they are related to each other, and my goal is to tag them so that I can query for a particular id value and get all objects related to that id. The id value would be a GUID, and that GUID would map to many different types of related objects: for example, I could have a video file, a picture file, a metadata file, and a JSON file, and I want to get all of those files using their common id value. Please feel free to offer suggestions too, because I have the ability to structure this as I see fit.
Update - Note
S3 Object Metadata and S3 Object Tagging are two different things.
Athena does not support querying based on S3 object tags.
One workaround is to create a meta file that contains the tag-to-file mapping, maintained by a Lambda function: whenever a new file arrives in S3, the Lambda updates a mapping file in S3 with the object's key and tag details.
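A rough sketch of that workaround, assuming the Lambda is triggered by an S3 object-created notification and writes one JSON Lines record per object under a hypothetical manifest/ prefix in a separate bucket (Athena could then read that prefix as an external table, and the query in the question becomes a plain filter on the id column):

import json
import boto3

s3 = boto3.client('s3')
MANIFEST_BUCKET = 'my-manifest-bucket'   # hypothetical bucket for the mapping

def lambda_handler(event, context):
    # Triggered by an S3 "object created" notification
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Read the object's tags and flatten them into a dict
        tag_set = s3.get_object_tagging(Bucket=bucket, Key=key)['TagSet']
        tags = {t['Key']: t['Value'] for t in tag_set}

        # One JSON line per object: bucket, key, and all tags (e.g. "id")
        line = json.dumps({'bucket': bucket, 'key': key, **tags})
        s3.put_object(
            Bucket=MANIFEST_BUCKET,
            Key=f'manifest/{bucket}/{key}.json',
            Body=line.encode('utf-8'),
        )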

Alfresco batch loading service for nodes metadata

In our project we are using Alfresco 4.2-c as the content store. We need a web service for loading node metadata. The required properties that should be included in the result are createdOn and modifiedOn. We have the IDs of the nodes whose dates should be retrieved. Is there any way of getting these properties for multiple nodes in one request, rather than one by one?
I already tried POST /alfresco/service/api/bulkmetadata, but none of the properties that I need is included in the result.
I also tried to create search request, but it returns only nodeRef ids.
I am aware that there are services to get this information one by one, but I don't want that, because I need the information for over 40K nodes.
Try enumerating the properties you need in a CMIS query against the browser binding:
http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser/?
cmisselector=query
&maxItems=10
&skipCount=0
&succinct=true
&q=
select cmis:objectId, cmis:creationDate, cmis:lastModificationDate
from cmis:document
where
cmis:objectId in (
'8dac37fa-1cd4-4226-85a3-03e8fdb64e16',
'395417a9-4bd1-4dd1-b33c-c4e555abccae'
)
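With roughly 40K node IDs you will want to batch the IN list rather than send one huge query. A rough Python sketch against the same browser-binding endpoint (hypothetical host, credentials, and batch size):

import requests

CMIS_URL = ('http://localhost:8080/alfresco/api/-default-/public'
            '/cmis/versions/1.1/browser/')
AUTH = ('admin', 'admin')     # hypothetical credentials
BATCH_SIZE = 500              # arbitrary; tune to what the server accepts

def fetch_dates(node_ids):
    results = []
    for i in range(0, len(node_ids), BATCH_SIZE):
        batch = node_ids[i:i + BATCH_SIZE]
        id_list = ', '.join(f"'{node_id}'" for node_id in batch)
        query = ('SELECT cmis:objectId, cmis:creationDate, cmis:lastModificationDate '
                 'FROM cmis:document '
                 f'WHERE cmis:objectId IN ({id_list})')
        resp = requests.get(
            CMIS_URL,
            params={'cmisselector': 'query', 'q': query,
                    'succinct': 'true', 'maxItems': BATCH_SIZE},
            auth=AUTH,
        )
        resp.raise_for_status()
        # With succinct=true each result carries its values under succinctProperties
        results.extend(item['succinctProperties'] for item in resp.json()['results'])
    return results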

How do I get the S3 key's created date with boto?

Boto's S3 Key object contains last_modified date (available via parse_ts) but the base_field "date" (i.e., ctime) doesn't seem to be accessible, even though it's listed in key.base_fields.
Based on the table at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html, it does seem that it is always automatically created (and I can't imagine a reason why it wouldn't be). It's probably just a simple matter of finding it somewhere in the object attributes, but I haven't been able to find it so far, although I did find the base_fields attribute which contains 'date'. (They're just a set, don't seem to have any available methods, and I haven't been able to find documentation regarding ways to inspect them.)
For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
Interestingly, create_time (system metadata field "Date" in link above) does not show up in the AWS S3 console, either, although last_modified is visible.
TL;DR: Because overwriting an S3 object is essentially creating a new one, the "last modified" and "creation" timestamps will always be the same.
Answering the old question, just in case others run into the same issue.
Amazon S3 maintains only the last modified date for each object.
For example, the Amazon S3 console shows the Last Modified date in the object Properties pane. When you initially create a new object, this date reflects the date the object is created. If you replace the object, the date changes accordingly. So when we use the term creation date, it is synonymous with the term last modified date.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html
I suggest using key.last_modified, since key.date seems to return the last time you viewed the file.
So something like this:
key = bucket.get_key(key.name)
print(key.last_modified)
After additional research, it appears that S3 key objects returned from a list() may not include this metadata field!
The Key objects returned by the iterator are obtained by parsing the results of a GET on the bucket, also known as the List Objects request. The XML returned by this request contains only a subset of the information about each key. Certain metadata fields such as Content-Type and user metadata are not available in the XML. Therefore, if you want these additional metadata fields you will have to do a HEAD request on the Key in the bucket. (docs)
In other words, looping through keys:
for key in conn.get_bucket(bucket_name).list():
    print(key.date)
... does not return the complete key with creation date and some other system metadata. (For example, it's also missing ACL data).
Instead, to retrieve the complete key metadata, use this method:
key = bucket.get_key(key.name)
print (key.date)
This necessitates an additional HTTP request as the docs clearly state above. (See also my original issue report.)
Additional code details:
import boto
# get connection
conn = boto.connect_s3()
# get first bucket
bucket = conn.get_all_buckets()[0]
# get first key in first bucket
key = list(bucket.list())[0]
# get create date if available
print (getattr(key, "date", False))
# (False)
# access key via bucket.get_key instead:
k = bucket.get_key(key.name)
# check again for create_date
getattr(k, "date", False)
# 'Sat, 03 Jan 2015 22:08:13 GMT'
# Wait, that's the current UTC time..?
# Also print last_modified...
print (k.last_modified)
# 'Fri, 26 Apr 2013 02:41:30 GMT'
If you have versioning enabled for your S3 bucket, you can use list_object_versions and take the smallest LastModified date among the object's versions; that should be the date it was created.
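A short boto3 sketch of that idea (hypothetical bucket and key names; it assumes versioning was already enabled when the object was first written, and ignores pagination for brevity):

import boto3

s3 = boto3.client('s3')

def creation_date(bucket, key):
    # With versioning enabled, the oldest version's LastModified is
    # effectively the object's creation date.
    resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
    versions = [v for v in resp.get('Versions', []) if v['Key'] == key]
    return min(v['LastModified'] for v in versions) if versions else None

print(creation_date('my-bucket', 'path/to/object.txt'))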