Not able to add data into DynamoDB using API gateway POST method - amazon-web-services

I made a Serverless API backend on the AWS console which uses API Gateway, DynamoDB, and Lambda functions.
After creating it, I can add data to DynamoDB online by entering a JSON document that looks like this:
{
"id": "4",
"k": "key1",
"v": "value1"
}
But when I try to add the same JSON data in the body of a POST request from Postman, I get a positive response (i.e. no errors), yet only the "id" field is added to the database, not "k" or "v".
What is missing?

I think you need to check your Lambda function.
Since you are using Postman to make the API calls, the received event's body will look like this:
{
  'resource': ...,
  ...
  'body': '{\n\t"id": 1,\n\t"name": "ben"\n}',
  'isBase64Encoded': False
}
As you can see:
'body': '{\n\t"id": 1,\n\t"name": "ben"\n}'
For example, using Python 3, you need to parse the body string into a dictionary before you can use it:
import json

result = json.loads(event['body'])
id = result['id']
name = result['name']
Then write them into DynamoDB:
# 'table' is your boto3 DynamoDB Table resource
item = table.put_item(
    Item={
        'id': str(id),
        'name': str(name)
    }
)
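Putting this together for the fields in the original question (id, k, v), a minimal handler sketch could look like the following; the table name 'my-table' is a placeholder and a Lambda proxy integration is assumed:

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-table')  # placeholder table name

def lambda_handler(event, context):
    # With a proxy integration, the POST body arrives as a JSON string
    body = json.loads(event['body'])

    table.put_item(
        Item={
            'id': str(body['id']),
            'k': body['k'],
            'v': body['v']
        }
    )

    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'item stored'})
    }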

Related

Accessing subitems in api gateway response mapping

I'm trying to do a mapping in an API Gateway and I can't manage to access the child objects inside the returned JSON. This is my case:
When I test the endpoint directly in the api gateway I get this response:
{
  "status": "FAIL",
  "output": {
    "errorCode": "my code",
    "message": "my message"
  }
}
And the api gateway integration response mapping is as follows:
#set($inputRoot = $input.path("$.output"))
$inputRoot
But I just want to return the json inside the output key, so I tried the following:
#set($inputRoot = $input.path("$.output"))
$inputRoot.output
And when I run it I get no data.
Before the transformation, the return value is
{
"output":"{\"status\":\"FAIL\",\"output\":{\"errorCode\":\"my code\",\"message\":\"my message\"}}"
}
I think the fact that it is returned as a string has something to do with it, but I've tried $util.parseJson and $util.escapeJavaScript and had no luck.
Does anyone know how I can solve this? I can't change the integration response, I have to do it through the API Gateway mapping.
The template needs to emit valid JSON, so wrap the extracted value in an object:
#set($inputRoot = $input.path("$.output"))
{
"output": "$inputRoot"
}

Boto3 athena query without saving data to s3

I am trying to use boto3 to run a set of queries and don't want to save the data to S3. Instead I just want to get the results and work with them directly. This is what I am trying:
import boto3
client = boto3.client('athena')
response = client.start_query_execution(
    QueryString='''SELECT * FROM mytable limit 10''',
    QueryExecutionContext={
        'Database': 'my_db'
    },
    ResultConfiguration={
        'OutputLocation': 's3://outputpath',
    }
)
print(response)
But here I don't want to give ResultConfiguration because I don't want to write the results anywhere. But if I remove the ResultConfiguration parameter I get the following error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Missing required parameter in input: "ResultConfiguration"
So it seems that giving an S3 output location for writing is mandatory. Is there a way to avoid this and get the results only in the response?
The StartQueryExecution action indeed requires an S3 output location; the ResultConfiguration parameter is mandatory.
The alternative way to query Athena is through the JDBC or ODBC drivers. You should probably use that method if you don't want to store results in S3.
You will have to specify an S3 temp bucket location whenever running the 'start_query_execution' command. However, you can get a result set (a dict) by running the 'get_query_results' method using the query id.
The response (dict) will look like this:
{
    'UpdateCount': 123,
    'ResultSet': {
        'Rows': [
            {
                'Data': [
                    {
                        'VarCharValue': 'string'
                    },
                ]
            },
        ],
        'ResultSetMetadata': {
            'ColumnInfo': [
                {
                    'CatalogName': 'string',
                    'SchemaName': 'string',
                    'TableName': 'string',
                    'Name': 'string',
                    'Label': 'string',
                    'Type': 'string',
                    'Precision': 123,
                    'Scale': 123,
                    'Nullable': 'NOT_NULL'|'NULLABLE'|'UNKNOWN',
                    'CaseSensitive': True|False
                },
            ]
        }
    },
    'NextToken': 'string'
}
For more information, see boto3 client doc: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.get_query_results
You can then delete all files in the S3 temp bucket you've specified.
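As an illustration of that workflow, here is a rough sketch that reuses the query, database, and output location from the question (the polling loop is deliberately simple):

import time
import boto3

client = boto3.client('athena')

# Athena still writes the result files to the temporary S3 location
execution = client.start_query_execution(
    QueryString='SELECT * FROM mytable limit 10',
    QueryExecutionContext={'Database': 'my_db'},
    ResultConfiguration={'OutputLocation': 's3://outputpath'}
)
query_id = execution['QueryExecutionId']

# Poll until the query reaches a terminal state
while True:
    status = client.get_query_execution(QueryExecutionId=query_id)
    state = status['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

# Read the result set as a dict instead of downloading the S3 file
if state == 'SUCCEEDED':
    results = client.get_query_results(QueryExecutionId=query_id)
    for row in results['ResultSet']['Rows']:
        print([col.get('VarCharValue') for col in row['Data']])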
You still need to provide an S3 location as the temporary output for Athena to save the data, even though you want to process it with Python. But you can page through the results using the pagination API; please refer to the paginator sketch below. Hope that helps.
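A rough sketch of that pagination approach, reusing the query_id from the sketch above (the paginator follows NextToken for you on large result sets):

import boto3

client = boto3.client('athena')
paginator = client.get_paginator('get_query_results')

# Iterate over every page of the finished query's result set
for page in paginator.paginate(QueryExecutionId=query_id):
    for row in page['ResultSet']['Rows']:
        print([col.get('VarCharValue') for col in row['Data']])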

AWS: Transforming data from DynamoDB before it's sent to Cloudsearch

I'm trying to set up AWS' Cloudsearch with a DynamoDB table. My data structure is something like this:
{
  "name": "John Smith",
  "phone": "0123 456 789",
  "business": {
    "name": "Johnny's Cool Co",
    "id": "12345",
    "type": "contractor",
    "suburb": "Sydney"
  },
  "profession": {
    "name": "Plumber",
    "id": "20"
  },
  "email": "johnsmith@gmail.com",
  "id": "354684354-4b32-53e3-8949846-211384"
}
Importing this data from DynamoDB -> Cloudsearch is a breeze, however I want to be able to index on some of these nested object parameters (like business.name, profession.name etc).
Cloudsearch is pulling in some of the nested objects like suburb, but it seems like it's impossible for it to differentiate between the name in the root of the object and the name within the business and profession objects.
Questions:
How do I make these nested parameters searchable? Can I index on business.name or something?
If #1 is not possible, can I somehow send my data through a transforming function before it gets to Cloudsearch? This way I could flatten all of my objects and give the fields unique names like businessName and professionName
EDIT:
My solution at the moment is to have a separate DynamoDB table which replicates our users table, but stores it in a CloudSearch-friendly format. However, I don't like this solution at all so any other ideas are totally welcome!
You can use DynamoDB Streams and write a function that runs in Lambda to capture changes and add documents to CloudSearch, flattening them at that point, instead of keeping an additional DynamoDB table.
For example, within my Lambda function I keep a list of nested fields (within a "body" parent in this case) and just flatten them under their field names; in the case of duplicate sub-field names you can prepend the parent name to create a new key such as "body-name".
... misc. setup ...  # imports, plus 'url' (CloudSearch document endpoint) and 'awsauth' (signed credentials) defined here
headers = { "Content-Type": "application/json" }
indexed_fields = ['app', 'name', 'activity']  # fields to flatten

def handler(event, context):  # lambda handler called at each update
    document = {}  # document to be uploaded to cloudsearch
    document['id'] = ...  # your uid, from the dynamo update record likely
    document['type'] = 'add'
    all_fields = {}
    # flatten/pull out info you want indexed
    for record in event['Records']:
        body = record['dynamodb']['NewImage']['body']['M']
        for key in indexed_fields:
            all_fields[key] = body[key]['S']
    document['fields'] = all_fields
    # post update to cloudsearch endpoint
    r = requests.post(url, auth=awsauth, json=document, headers=headers)
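Applied to the record structure from the question, the flattening step might look roughly like the sketch below. The stream image layout, the document endpoint URL, and the field names are assumptions (CloudSearch index field names allow lowercase letters, digits, and underscores, hence business_name rather than businessName):

import requests

# Placeholder CloudSearch document endpoint
DOC_ENDPOINT = 'https://doc-mydomain.eu-west-1.cloudsearch.amazonaws.com/2013-01-01/documents/batch'

def flatten(new_image):
    # Pull nested DynamoDB attribute values out into uniquely named flat fields
    return {
        'name': new_image['name']['S'],
        'business_name': new_image['business']['M']['name']['S'],
        'business_suburb': new_image['business']['M']['suburb']['S'],
        'profession_name': new_image['profession']['M']['name']['S'],
    }

def handler(event, context):
    batch = []
    for record in event['Records']:
        new_image = record['dynamodb']['NewImage']
        batch.append({
            'type': 'add',
            'id': new_image['id']['S'],
            'fields': flatten(new_image),
        })
    # Upload the flattened documents as a CloudSearch document batch
    requests.post(DOC_ENDPOINT, json=batch,
                  headers={'Content-Type': 'application/json'})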

Appsync response mapping template json key name change

What would be the right way to rename a JSON key in an AWS AppSync response mapping template?
JSON that I get looks like this:
{
"tenant_id": 1,
"id": "bd8ce6a8-8532-47ec-8b7f-dcd1f1603320",
"header": "Header name",
"visible": true
}
and what I would like to pass forward is
{
"tenantId": 1,
"id": "bd8ce6a8-8532-47ec-8b7f-dcd1f1603320",
"header": "Header name",
"visible": true
}
The schema wants the tenant id as tenantId and the Lambda returns it as tenant_id. I could change it in the Lambda, but I would like to know how to do it in the response mapping template.
You could do this via the response mapping template of the field you are resolving, in the following manner:
Assuming the JSON response from your Lambda is stored in the $response variable, you can return something like this:
#set($result = {
    "tenantId": $response.tenant_id,
    "id": "$response.id",
    "header": "$response.header",
    "visible": $response.visible
})
$util.toJson($result)
Alternatively, you could also mutate your response from the lambda by setting a tenantId field, something like #set( $response.tenantId = $response.tenant_id ). Let me know if you still face an issue.
Thanks,
Shankar

Getting response from AWS Lambda function to AWS Lex bot is giving error?

I have created an AWS Lex bot and I am invoking a Lambda function from that bot. When I test the Lambda function I get a proper response, but from the bot I get the error below:
An error has occurred: Received invalid response from Lambda: Can not
construct instance of IntentResponse: no String-argument
constructor/factory method to deserialize from String value
('2017-06-22 10:23:55.0') at [Source: "2017-06-22 10:23:55.0"; line:
1, column: 1]
I'm not sure what is wrong or what I am missing. Could anyone assist me, please?
The solution to the above problem is to make sure that the response returned by the Lambda function, to be used by the AWS Lex chat bot, is in the format below:
{
  "sessionAttributes": {
    "key1": "value1",
    "key2": "value2"
    ...
  },
  "dialogAction": {
    "type": "ElicitIntent, ElicitSlot, ConfirmIntent, Delegate, or Close",
    Full structure based on the type field. See below for details.
  }
}
This gives the chat bot the DialogAction and corresponding elements it expects in order to process the message, i.e. the IntentResponse.
Reference: http://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html
no String-argument constructor/factory method to deserialize from String value
You are getting this error because you must be returning plain string values from your Lambda function. You have to return a predefined JSON object blueprint in the response.
Communication between Lex and Lambda is not simple value passing as between normal functions. Amazon Lex expects output from Lambda in a particular JSON format, and data is sent to Lambda in a particular JSON format as well. The examples are here: Lambda Function Input Event and Response Format.
And just copying and pasting the blueprint won't work, because in some fields you have to choose between predefined values and in others you have to enter valid input.
For example in,
"dialogAction": {
"type": "Close",
"fulfillmentState": "Fulfilled or Failed",
"message": {
"contentType": "PlainText or SSML",
"content": "Thanks, your pizza has been ordered."
}
}
you have to assign either "Fulfilled" or "Failed" to the 'fulfillmentState' field, and the same goes for 'contentType'.
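For illustration, a minimal Python Lambda handler returning a validly shaped Close response might look like this (the message text is a placeholder):

def lambda_handler(event, context):
    # Return the structure Lex expects rather than a bare string
    return {
        'sessionAttributes': event.get('sessionAttributes') or {},
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': 'Fulfilled',
            'message': {
                'contentType': 'PlainText',
                'content': 'Thanks, your pizza has been ordered.'  # placeholder
            }
        }
    }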