AWS Lambda to check DynamoDB KMS encryption

I want to write a Lambda function to catch any table in DynamoDB which is not using KMS encryption. I am planning to do the following:
Create an SNS topic
Write a Lambda function
Trigger the Lambda function from the CloudWatch event CreateTable (see the event-pattern sketch below)
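For step 3, the rule's event pattern would look something like this (a sketch; it assumes CloudTrail is recording DynamoDB management events in the account):

{
    "source": ["aws.dynamodb"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["dynamodb.amazonaws.com"],
        "eventName": ["CreateTable"]
    }
}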
My question is: if KMS is not being used, then the following lines will not be in the event details JSON:
"sSEDescription": {
"sSEType": "KMS",
"kMSMasterKeyArn": "",
"status": "ENABLED"
},
So in my Python code, should I check whether sSEDescription is missing, or is there a better way?
Appreciate any input to make my code better.
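For reference, a minimal sketch in Python of the check I have in mind (it assumes the rule delivers the CloudTrail record under event['detail']; the topic ARN and topic name are placeholders):

import boto3

def lambda_handler(event, context):
    # CloudTrail lowercases the first character of response elements,
    # hence "sSEDescription" rather than "SSEDescription".
    table = event.get('detail', {}).get('responseElements', {}).get('tableDescription', {})
    sse = table.get('sSEDescription')

    # When SSE-KMS is not in use, sSEDescription is simply absent,
    # so checking for its presence (rather than for NULL) is enough.
    if not sse or sse.get('sSEType') != 'KMS':
        # Placeholder topic ARN -- substitute your own.
        boto3.client('sns').publish(
            TopicArn='arn:aws:sns:us-east-1:111122223333:unencrypted-tables',
            Message='Table without KMS encryption: ' + table.get('tableName', 'unknown'),
        )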

Related

How Do I Enable Object-Level Logging for an S3 Bucket using boto3

I'm trying to create an Amazon CloudWatch rule which triggers whenever an object is uploaded into a bucket. I know that to do this I need to trigger on the PutObject event; however, as best I can tell, that requires enabling object-level logging on the bucket. I will be using a multitude of buckets and want to be able to automate that process, and because of how most of the system is set up, using boto3 seems to make the most sense. So how can I turn object-level logging on using boto3?
The only official AWS resource I've been able to find so far is How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events?, which explains how to enable object-level logging through the GUI.
I've also looked through the boto3 library documentation.
Both have ultimately not been helpful based on my understanding.
My chief goal is to enable object-level logging through boto3, if that's something that can be done.
You can configure an Amazon S3 Event so that, when a new object is created, it can:
Trigger an AWS Lambda function
Put a message in an Amazon SQS queue
Send a message to an Amazon SNS topic
See: Configuring Amazon S3 Event Notifications
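For example, a minimal boto3 sketch of attaching such a notification to a bucket (the bucket name and function ARN are placeholders, and the function's resource policy must already allow s3.amazonaws.com to invoke it):

import boto3

s3 = boto3.client('s3')

# Placeholder names -- replace with your own bucket and function ARN.
s3.put_bucket_notification_configuration(
    Bucket='your_bucket_name',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:111122223333:function:YourFunction',
                'Events': ['s3:ObjectCreated:*'],
            },
        ],
    },
)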
You can use the put_event_selectors() function of the CloudTrail service.
client = boto3.client('cloudtrail')  # note: CloudTrail, not S3

client.put_event_selectors(
    TrailName='TrailName',
    EventSelectors=[
        {
            'ReadWriteType': 'All',
            'IncludeManagementEvents': True,
            'DataResources': [
                {
                    'Type': 'AWS::S3::Object',
                    'Values': [
                        'arn:aws:s3:::your_bucket_name/',
                    ],
                },
            ],
        },
    ],
)
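Note that put_event_selectors() only updates the event selectors on an existing trail, so the trail named in TrailName must already exist (it can be created with create_trail() if needed).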

Can I get the lambda function trigger information using aws cli?

I am working on a serverless project and I only have access to the AWS CLI, so I want to get the trigger information of a function, such as the event source. Since I am using an SNS topic to trigger the function, I want to get the topic information and ARN. I tried different options, such as:
list-event-source-mappings, which returns an empty array
get-function, which doesn't hold that value
Is there a way to get the trigger information of a function with the AWS CLI?
In this case, I believe the only way to get that information would be from the get-policy API call, as that will contain the resource-based policy (a.k.a. the trigger) which allows the other service to invoke the Lambda.
The get-event-source-mappings API returns only the stream-based event sources in the region, such as:
Kinesis
DynamoDB
SQS
So, for example, if I have a Lambda function which is configured to be invoked from SNS, then the policy returned would be similar to:
aws lambda get-policy --function-name arn:aws:lambda:us-east-1:111122223333:function:YOUR_LAMBDA_NAME_HERE --query Policy --output text | jq '.Statement[0].Condition.ArnLike["AWS:SourceArn"]'
OUTPUT:
"arn:aws:sns:REGION:111122223333:TOPIC_NAME"
Though that assumes the policy in the Lambda function only has that one statement. If you know the specific statement id, then you should be able to select it in jq using a filter, as shown below.
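For example, something along these lines (the statement id "sns-invoke" is hypothetical):

aws lambda get-policy --function-name YOUR_LAMBDA_NAME_HERE --query Policy --output text | jq '.Statement[] | select(.Sid == "sns-invoke") | .Condition.ArnLike["AWS:SourceArn"]'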

Unable to Invoke AWS Lambda Function in Amazon Connect Contact Flow

I am trying to integrate an AWS Lambda function in an Amazon Connect contact flow. The AWS Lambda function is working fine and giving a response. While invoking the function in the Connect contact flow, it returns an error, but I am unable to find out what the error is or where the error log is stored.
I am trying to get the user's phone number into Amazon Connect, and then I would like to check whether the phone number already exists in DynamoDB. For this, I am writing the Lambda function and trying to invoke it from Amazon Connect:
const AWS = require('aws-sdk');
const doClient = new AWS.DynamoDB.DocumentClient({region: 'us-east-1'});

exports.handler = function(event, context, callback) {
    // Look up the caller's phone number, which Amazon Connect passes
    // in the contact event, in the 'testdata' table.
    var params = {
        TableName: 'testdata',
        Key: {
            Address: event.Details.ContactData.CustomerEndpoint.Address
        }
    };
    doClient.get(params, function(err, data) {
        if (err) {
            callback(err, null);
        } else {
            callback(null, data);
        }
    });
};
First, you need to make sure permissions have been granted properly. From the AWS CLI, issue the following command with the following edits:
Replace "Lambda_Function_Name" with the actual name of your Lambda function.
Replace the source-account "111122223333" with your AWS account number.
Replace the source-arn string with the ARN string of your Amazon Connect instance.
aws lambda add-permission --function-name function:Lambda_Function_Name --statement-id 1 --principal connect.amazonaws.com --action lambda:InvokeFunction --source-account 111122223333 --source-arn arn:aws:connect:us-east-1:111122223333:instance/444555a7-abcd-4567-a555-654327abc87
Once your permissions are set up correctly, Amazon Connect should be able to access Lambda. You must, however, ensure that your Lambda function returns a properly formatted response. The output returned from the function must be a flat object of key/value pairs, with values that include only alphanumeric, dash, and underscore characters. Nested and complex objects are not supported. The size of the returned data must be less than 32 KB of UTF-8 data.
Even with logging enabled on your call flow, Amazon Connect doesn't provide very detailed information about why a Lambda function fails. I would recommend hard coding a simple response in your Lambda function such as the following node.js response to ensure your Lambda response format isn't causing your issue and then work from there.
callback(null, {test : "Here is a valid response"});
When you are using the "Invoke AWS Lambda function" step, you do not need to pass the phone number to Lambda as a separate parameter. Amazon Connect already passes a JSON object to Lambda that contains that information. Below is a sample of what Amazon Connect sends to Lambda.
{
    "Details": {
        "ContactData": {
            "Attributes": {
                "Call_Center": "0"
            },
            "Channel": "VOICE",
            "ContactId": "",
            "CustomerEndpoint": {
                "Address": "+13215551212",
                "Type": "TELEPHONE_NUMBER"
            },
            "InitialContactId": "",
            "InitiationMethod": "INBOUND",
            "InstanceARN": "",
            "PreviousContactId": "",
            "Queue": null,
            "SystemEndpoint": {
                "Address": "+18005551212",
                "Type": "TELEPHONE_NUMBER"
            }
        }
    },
    "Name": "ContactFlowEvent"
}
You can use the following in your Lambda function to reference the calling number to look up in your DynamoDB table:
var CallingNumber = event.Details.ContactData.CustomerEndpoint.Address;
Hope this helps.

Passing payload through AWS S3/Lambda Trigger

I am new to the AWS platform. I have invoked a Lambda function through the AWS CLI:
aws lambda invoke --function-name CFT ... --payload file://${DATA_TMP}/affl_ftp_config.json ${DATA_LOG}/outfile.txt
Here, the payload is a JSON file:
{
    "s3_bucket": "fanatics.dev.internal.confidential",
    ....
    "date": "20160813"
}
This JSON file is being used as part of the event object in my Lambda handler.
Is it possible to have this behavior configured when a S3 file is uploaded and it automatically triggers a Lambda function?
For example, I upload a file into an S3 bucket, and that upload triggers a Lambda function with the JSON payload shown above.
No, you can't.
The Lambda function triggered by an S3 upload provides information about the new object (region, bucket, key, version-id if the bucket is versioned) but does not provide the object payload.
See the documented S3 Event Message Structure. This is what a Lambda function invoked by S3 will receive.
So, the Lambda function invoked by the S3 event must then fetch the object from S3 in order to access the payload.
So, either your existing lambda function will need to be modified, or you'll need a new lambda function to respond to the event, fetch the payload, and then call the original function.
Note also that if these events are triggered by overwrites of existing objects, then you will want versioning enabled on your bucket and you'll want to use GetObjectVersion to fetch the payload with the explicit versionId in the event, because GetObject (without specifying the version) may return stale data on overwrites.
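A minimal sketch of such a responder in Python, assuming versioning is enabled as noted above and reusing the function name CFT from the question (everything else is illustrative):

import urllib.parse
import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        version_id = record['s3']['object'].get('versionId')

        # Fetch the exact version named in the event so an overwrite
        # can't hand us stale data.
        if version_id:
            obj = s3.get_object(Bucket=bucket, Key=key, VersionId=version_id)
        else:
            obj = s3.get_object(Bucket=bucket, Key=key)

        # Hand the payload to the original function ("CFT" in the question).
        lambda_client.invoke(
            FunctionName='CFT',
            InvocationType='Event',
            Payload=obj['Body'].read(),
        )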
Yes, you can. S3 is one of the supported Lambda triggers; see the AWS documentation on using S3 as an event source for details.

AWS IoT Thing can't trigger AWS Lambda function?

I set up my Lambda function according to the AWS guide by setting a trigger in the setup stage (except that the guide uses an IoT button and I'm using a rule).
It sets up the trigger rule in the AWS IoT console for me. The thing is set up with a certificate and an "iot:*" policy which gives it full IoT access.
The thing is continuously sending messages to the cloud under a certain topic. The messages can be received if I subscribe to it in the AWS IoT Test console.
My lambda function gets triggered if I publish something under that topic from the AWS IoT Test console.
But the function doesn't trigger from the continuous messages sent by the thing. It only triggers from the IoT Test console.
I didn't add any other policy under certificates for the thing in relation to this trigger. Do I have to do so? What should it be?
I tried changing my topic SQL to SELECT * FROM '*'
Try changing your SQL to SELECT * FROM '#'. With '#' you get every published topic. When you use '*', you don't match multi-level topics such as sample/newTopic.
With this SQL statement, the Lambda function gets invoked for every incoming message. If the AWS IoT console shows the message but your Lambda function doesn't do anything, check whether the function wrote a log to CloudWatch.
If your AWS IoT thing can't trigger your AWS Lambda function, you may have a JSON mapping issue, and you may also need to improve your SQL query. In my case, I used the following code to provide Lambda a clean input:
SELECT message.reported.* from "#"
With JSON mapping:
{
    "desired": {
        "light": "green",
        "Temperature": "55",
        "timestamp": 1526323886
    },
    "reported": {
        "light": "blue",
        "Temperature": "55",
        "timestamp": 1526323886
    },
    "delta": {
        "light": "green"
    }
}
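Assuming the shadow document above arrives nested under a top-level message key (which the rule's message.reported.* path implies), the Lambda function would then receive just the flattened reported fields, roughly:

{
    "light": "blue",
    "Temperature": "55",
    "timestamp": 1526323886
}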
Then analyze the CloudWatch logs, and check your AWS IoT console for shadow updates ("Atualizações de sombra", i.e. "Shadow updates") and for publications.
For full details of an end-to-end implementation of AWS IoT using Lambda, see:
IoT Project - CPU Temperature from Ubuntu to AWS IoT