I am trying to integrate an AWS Lambda function into an Amazon Connect contact flow. The Lambda function works fine on its own and returns a response, but when I invoke it from the Connect contact flow it returns an error, and I cannot find out what the error is or where the error log is stored.
I am getting the caller's phone number in Amazon Connect and then want to check whether that phone number already exists in DynamoDB. For this, I am writing the following Lambda function and trying to invoke it from Amazon Connect:
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = function (event, context, callback) {
    // Look up the caller's phone number (passed in by Amazon Connect) in the table
    var params = {
        TableName: 'testdata',
        Key: {
            Address: event.Details.ContactData.CustomerEndpoint.Address
        }
    };

    docClient.get(params, function (err, data) {
        if (err) {
            callback(err, null);
        } else {
            callback(null, data);
        }
    });
};
First, you need to make sure permissions have been granted properly. From the AWS CLI, issue the following command, with these edits:
Replace "Lambda_Function_Name" with the actual name of your Lambda function.
Replace the source-account "111122223333" with your AWS account number.
Replace the source-arn value with the ARN of your Amazon Connect instance.
aws lambda add-permission --function-name function:Lambda_Function_Name --statement-id 1 --principal connect.amazonaws.com --action lambda:InvokeFunction --source-account 111122223333 --source-arn arn:aws:connect:us-east-1:111122223333:instance/444555a7-abcd-4567-a555-654327abc87
Once your permissions are set up correctly, Amazon Connect should be able to access Lambda. You must, however, ensure that your Lambda function returns a properly formatted response. The output returned from the function must be a flat object of key/value pairs, with values that include only alphanumeric, dash, and underscore characters. Nested and complex objects are not supported. The size of the returned data must be less than 32 KB of UTF-8 data.
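For example, a response shaped like the first call below can be read by Connect, while returning the raw DocumentClient result (as your current code appears to do) fails because the item is nested under Item. The attribute names here are only illustrative:
// Valid: a flat object of simple key/value pairs
callback(null, { numberExists: "true", customerName: "JaneDoe" });

// Not valid for Connect: a nested object, such as the unmodified DocumentClient result
callback(null, { Item: { Address: "+13215551212", Name: "JaneDoe" } });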
Even with logging enabled on your contact flow, Amazon Connect doesn't provide very detailed information about why a Lambda function fails. I would recommend hard-coding a simple response in your Lambda function, such as the following Node.js response, to ensure your Lambda response format isn't causing your issue, and then work from there.
callback(null, {test : "Here is a valid response"});
When you use the "Invoke AWS Lambda function" step, you do not need to pass the phone number to Lambda as a separate parameter; Amazon Connect already passes a JSON object to Lambda that contains that information. Below is a sample of what Amazon Connect sends to Lambda.
{
    "Details": {
        "ContactData": {
            "Attributes": {
                "Call_Center": "0"
            },
            "Channel": "VOICE",
            "ContactId": "",
            "CustomerEndpoint": {
                "Address": "+13215551212",
                "Type": "TELEPHONE_NUMBER"
            },
            "InitialContactId": "",
            "InitiationMethod": "INBOUND",
            "InstanceARN": "",
            "PreviousContactId": "",
            "Queue": null,
            "SystemEndpoint": {
                "Address": "+18005551212",
                "Type": "TELEPHONE_NUMBER"
            }
        }
    },
    "Name": "ContactFlowEvent"
}
You can use the following in your Lambda function to reference the calling number for the lookup in your DynamoDB table.
var CallingNumber = event.Details.ContactData.CustomerEndpoint.Address;
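Putting it together, a minimal sketch of the handler could look like this (the table name testdata and key Address come from your code; the numberExists attribute name is just an example):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = function (event, context, callback) {
    // Amazon Connect already includes the caller's number in the event
    var callingNumber = event.Details.ContactData.CustomerEndpoint.Address;

    var params = {
        TableName: 'testdata',
        Key: { Address: callingNumber }
    };

    docClient.get(params, function (err, data) {
        if (err) {
            callback(err, null);
        } else {
            // Flatten the result: Connect only accepts a flat key/value object
            callback(null, { numberExists: data.Item ? "true" : "false" });
        }
    });
};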
Hope this helps.
Related
I am trying to create an AWS CloudWatch event that triggers an email whenever an S3 bucket is created or modified to allow public access.
I have created the CloudTrail trail and log stream and am tracking all the S3 event logs. When I try to create a custom event by giving a pattern to detect S3 buckets with public access, I don't get any response and the event doesn't trigger even if I create a bucket with public access. Can you help me out with the custom pattern for this?
I have tried giving GetPublicAccessBlock, PutPublicAccessBlock, etc. as the event type, but no luck. Please suggest accordingly.
You need to do the following in order to receive a notification:
Enable CloudTrail for management events
Create an EventBridge rule with an event pattern
Use the event source "AWS events or EventBridge partner events"
Use a pattern with AWS service Simple Storage Service (S3) and event type "AWS API Call via CloudTrail"
Note: this only works when Block Public Access is changed on an existing bucket (not for a new bucket).
The reason is that when we create a bucket with public access, only two events are generated, CreateBucket and PutBucketEncryption, and they don't seem to carry any information about public access being turned on. However, if we create a bucket with public access blocked, it generates an additional PutBucketPublicAccessBlock event along with CreateBucket and PutBucketEncryption.
{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketPublicAccessBlock", "DeleteBucketPublicAccessBlock"],
        "requestParameters": {
            "PublicAccessBlockConfiguration": {
                "$or": [{
                    "RestrictPublicBuckets": [false]
                }, {
                    "BlockPublicPolicy": [false]
                }, {
                    "BlockPublicAcls": [false]
                }, {
                    "IgnorePublicAcls": [false]
                }]
            }
        }
    }
}
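If you want a Lambda function target to send the notification (rather than pointing the rule straight at an SNS topic), a minimal Node.js sketch could look like the following. The topic ARN is a placeholder, and bucketName is assumed to be where the CloudTrail event carries the bucket name in its request parameters:
var AWS = require('aws-sdk');
var sns = new AWS.SNS();

exports.handler = function (event, context, callback) {
    // The matched CloudTrail event is delivered under event.detail
    var bucket = event.detail.requestParameters.bucketName;
    var params = {
        TopicArn: 'arn:aws:sns:us-east-1:111122223333:YOUR-TOPIC-HERE', // placeholder
        Subject: 'S3 public access settings changed',
        Message: 'The public access block configuration was modified for bucket: ' + bucket
    };
    sns.publish(params, callback);
};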
I want to write a Lambda function to catch any table in DynamoDB which is not using KMS encryption. I am planning to do the following:
Create a SNS topic
Write a Lambda function
Trigger the Lambda function from CloudWatch event: CreateTable
My question is: if KMS is not being used, then the following lines will not be in the event details:
"sSEDescription": {
"sSEType": "KMS",
"kMSMasterKeyArn": "",
"status": "ENABLED"
},
So in my Python code, should I check whether sSEDescription is null, or is there a better way?
I would appreciate any input to make my code better.
I have created a workflow like this:
A user requests instance creation through an API Gateway endpoint.
The gateway invokes a Lambda function that executes the following code.
Generate an RDP file with the public DNS name and give it to the user so that they can connect.
import time
import boto3

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2', region_name='us-east-1')
    instances = ec2.create_instances(...)
    instance = instances[0]
    time.sleep(3)
    instance.load()
    return instance.public_dns_name
The problem with this approach is that the user has to wait almost 2 minutes before they are able to log in successfully. I'm totally okay with letting the Lambda run for that time by adding the following code:
instance.wait_until_running()
But unfortunately, API Gateway has a 29-second timeout for Lambda integration, so even if I'm willing to wait, it wouldn't work. What's the easiest way to overcome this?
My approach to accomplish your scenario would be to use a CloudWatch Event Rule.
After creating the instance, the Lambda function must store some kind of relation between the instance and the user, something like this:
Proposed table:
The table structure is up to you, but these are the most important columns.
------------------------------
| Instance_id | User_Id |
------------------------------
Create a CloudWatch Event Rule that executes a Lambda function.
First, pick the event type EC2 Instance State-change Notification, then select the specific state Running (the equivalent event pattern is shown after these steps).
Second, pick the target: a Lambda function.
That Lambda function sends the email to the user.
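For reference, the console selections above correspond to an event pattern like this:
{
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {
        "state": ["running"]
    }
}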
That Lambda function will receive the InstanceId. With that information, you can find the related User_Id and send the necessary information to the user. You can use the SDK to get information of your EC2 instance, for example, its public_dns_name.
This is an example of the payload that will be sent by the CloudWatch Event Rule notification:
{
    "version": "0",
    "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "111122223333",
    "time": "2015-12-22T18:43:48Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-12345678"
    ],
    "detail": {
        "instance-id": "i-12345678",
        "state": "running"
    }
}
That way, you can send the public_dns_name once your instance is fully in service.
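As a rough sketch in Node.js (the table name, sender address, and Email attribute are assumptions layered on top of the Instance_id / User_Id table above), the notification Lambda could look something like this:
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2();
var docClient = new AWS.DynamoDB.DocumentClient();
var ses = new AWS.SES();

exports.handler = function (event, context, callback) {
    var instanceId = event.detail['instance-id'];

    // Find who requested this instance (table name is a placeholder;
    // this assumes the table also stores the user's email address)
    docClient.get({ TableName: 'InstanceOwners', Key: { Instance_id: instanceId } }, function (err, result) {
        if (err) return callback(err);

        // Fetch the public DNS name now that the instance is running
        ec2.describeInstances({ InstanceIds: [instanceId] }, function (err, data) {
            if (err) return callback(err);
            var dnsName = data.Reservations[0].Instances[0].PublicDnsName;

            // Email the user the address to connect to (sender address is a placeholder)
            ses.sendEmail({
                Source: 'noreply@example.com',
                Destination: { ToAddresses: [result.Item.Email] },
                Message: {
                    Subject: { Data: 'Your instance is ready' },
                    Body: { Text: { Data: 'Connect to: ' + dnsName } }
                }
            }, callback);
        });
    });
};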
Hope it helps!
Can someone tell me how to get access to AWS credentials in an AWS Lambda function?
I've searched the internet thoroughly but still haven't found anything that has helped me.
I'm writing the function in Java. I think I should have access to the credentials via the context object in the handleRequest method.
If it helps, I want to invoke a DynamoDB client and upload a record to the database.
I came into the same problem myself recently.
This certainly is a blind spot in AWS's Lambda documentation for Java, in my opinion.
This snippet in Java should work for you, assuming you're using the AWS SDK for Java Document API:
DynamoDB dynamodb = new DynamoDB(
new AmazonDynamoDBClient(new EnvironmentVariableCredentialsProvider()));
The main takeaway is to use the EnvironmentVariableCredentialsProvider to access the required credentials to access your other AWS resources within the AWS Lambda container. The Lambda containers are shipped with credentials as environment variables, and this is sufficient in retrieving them.
Note: This creates a DynamoDB instance that only sees resources in the default region. To create one for a specific region, use this (assuming you want to access DynamoDB's in the ap-northeast-1 region):
DynamoDB dynamodb = new DynamoDB(
Regions.getRegion(Regions.AP_NORTHEAST_1).createClient(
AmazonDynamoDBClient.class,
new EnvironmentVariableCredentialsProvider(),
new ClientConfiguration()));
Your Lambda function's permissions are controlled by the IAM role it executes as. Either add DynamoDB PutItem permission to the current role, or create a new role for this purpose.
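For example, a policy along these lines attached to the execution role would cover the PutItem call in the code below (the table ARN is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:PutItem",
        "Resource": "arn:aws:dynamodb:us-east-1:<your account id>:table/Scratch"
    }]
}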
After giving permissions to the Role, you don't need to write special code to handle credentials, just use the AWS SDK normally. For example:
var AWS = require("aws-sdk");

exports.handler = function (event, context) {
    var dynamodb = new AWS.DynamoDB();
    var putItemParams = {
        "TableName": "Scratch",
        "Item": {
            "Id": {
                "S": "foo"
            },
            "Text": {
                "S": "bar"
            }
        }
    };
    dynamodb.putItem(putItemParams, function (err, response) {
        if (err) {
            context.fail("dynamodb.putItem failed: " + err);
        } else {
            context.succeed("dynamodb.putItem succeeded");
        }
    });
};
This is sufficient to put an item in a DynamoDB table, given the correct role permissions.
Adding to #Gordon Tai's answer, using the current API with AmazonDynamoDBClientBuilder this looks like:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
.withCredentials(new EnvironmentVariableCredentialsProvider())
.withRegion(Regions.US_EAST_1)
.build();
DynamoDB dynamoDB = new DynamoDB(client);
I have an AWS Lambda function and an Amazon SNS topic. The Lambda code watches for changes in our database, and I would like it to call Amazon SNS to send pushes to our users. For example:
When a user on one of our forums gets a new message, the Lambda code recognizes this change the next time it runs (every 10 minutes) and should send a push to the user's smartphone via SNS.
I'm running into a wall with the documentation; Amazon's docs only talk about how to trigger Lambda code via SNS, but not the reverse. Does anyone have an example of how I can accomplish this?
There is nothing special about pushing SNS notifications in the context of Lambda. I would think of SNS as just another external service that you interact with.
What you could do is pull in the AWS SDK in your Lambda code and then use it to make the SNS calls. You will need the right credentials to be able to call the Amazon SNS API (but you probably do something similar to get the database endpoint and credentials if you are talking to the database).
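For example, a minimal Node.js handler that publishes to a topic could look like this (the topic ARN is a placeholder):
var AWS = require('aws-sdk');
var sns = new AWS.SNS();

exports.handler = function (event, context) {
    var params = {
        TopicArn: 'arn:aws:sns:us-east-1:<your account id>:<your topic>', // placeholder
        Message: 'A user has a new message waiting'
    };
    sns.publish(params, function (err, data) {
        if (err) context.fail(err);
        else context.succeed(data.MessageId);
    });
};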
Yes, you can use AWS Lambda to achieve what you want. You also need to grant the proper IAM permissions allowing your Lambda IAM role to publish messages to your SNS topic.
Example SNS publish IAM policy:
{
    "Statement": [{
        "Effect": "Allow",
        "Action": "sns:Publish",
        "Resource": "arn:aws:sns:*:<your account id>:<your topic id>"
    }]
}
You can use the Lambda below to push an SNS message to a user, but you must know the endpoint ARN for that user. For example, in an Android app, when the user logs in, the app sends a GCM (Google Cloud Messaging) token to your backend (via an API call that triggers a Lambda, for example). Your backend, which is connected to GCM, can then use this token to look up which endpoint ARN corresponds to that user and put it in the Lambda below. Alternatively, you can have the app send the endpoint ARN directly to your backend, though I think that is a bit less secure. Make sure you give IAM permissions to publish to your app via SNS. The following Lambda pushes the message:
var AWS = require('aws-sdk');
var sns = new AWS.SNS({apiVersion: '2010-03-31'});

exports.handler = (event, context, callback) => {
    console.log(JSON.stringify(event));
    var payload = {
        "default": "The message string.",
        "GCM": "{" +
            "\"notification\":{" +
                "\"body\":\"PUT NOTIFICATION BODY HERE\"," +
                "\"title\":\"PUT NOTIFICATION TITLE HERE\"" +
            "}" +
        "}"
    };
    payload = JSON.stringify(payload);
    var params = {
        TargetArn: 'PUT THE ENDPOINT ARN HERE',
        Subject: 'foo2',
        MessageStructure: 'json',
        Message: payload
    };
    sns.publish(params, function (err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else console.log(data);               // successful response
    });
};