AWS API Gateway store JSON to DynamoDB

I have an issue trying to use API Gateway as a proxy to DynamoDB.
Basically it works great if I know the structure of the data I want to store, but I cannot manage to make the mapping dynamic so it works regardless of the payload structure.
There are many websites explaining how to use API Gateway as a proxy to DynamoDB.
None that I found explains how to store a JSON object though.
Basically I send this JSON to my API endpoint:
{
    "entryId": "abc",
    "data": {
        "key1": "123",
        "key2": 123
    }
}
If I map it using the following template, the data gets put into my database properly:
{
    "TableName": "Events",
    "Item": {
        "entryId": {
            "S": "abc"
        },
        "data": {
            "M": {
                "key1": {
                    "S": "123"
                },
                "key2": {
                    "N": "123"
                }
            }
        }
    }
}
However, I don't know the structure of "data" in advance, which is why I want the mapping to be dynamic, or even better, I would like to avoid any mapping at all.
I managed to make it dynamic but all my entries are of type String now:
"data": { "M" : {
#foreach($key in $input.path('$.data').keySet())
"$key" : {"S": "$input.path('$.data').get($key)"}#if($foreach.hasNext),#end
#end }
}
Is it possible to get the type dynamically?
I am not quite sure how API Gateway mapping works yet.
Thank you for your help.
Seb

You aren't going to avoid some sort of mapping when inserting into DynamoDB. I would recommend using a Lambda function instead of a service proxy to give you more control and flexibility in mapping the data to your DynamoDB schema.
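For example (a minimal sketch, not the only way to do it): with a Lambda integration you can let the DynamoDB Document client marshal native JSON values into DynamoDB types for you, so "data" can have any shape. The table name and event shape below are assumptions based on the question, using the AWS SDK for JavaScript v3:

// Hypothetical Lambda handler: stores whatever JSON body it receives,
// letting the Document client convert JS strings/numbers/objects into S/N/M types.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: { body: string }) => {
    // With a Lambda proxy integration the raw JSON payload arrives in event.body.
    const payload = JSON.parse(event.body);

    await ddb.send(new PutCommand({
        TableName: "Events",          // table name from the question
        Item: {
            entryId: payload.entryId, // "abc" -> { "S": "abc" }
            data: payload.data,       // nested object -> { "M": ... }, numbers -> { "N": ... }
        },
    }));

    return { statusCode: 200, body: JSON.stringify({ stored: payload.entryId }) };
};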

You can enable CloudWatch logging to verify that the payload after transformation is what you expect. You can also use the test invoke feature in the AWS API Gateway console to find out how your mapping works.
Here is a blog post about using Amazon API Gateway as a proxy for DynamoDB: https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/

Related

Sharing API Gateway endpoint URL across different stacks in CDK

I have the following AWS CDK backed solution:
a static S3 based webpage which communicates with
an API Gateway, which then sends data to
an AWS Lambda function.
The problem is that the S3 page needs to be aware of the API Gateway endpoint URL.
Obviously this is not achievable within the same CDK stack. So I have defined two stacks:
Backend (API gateway + lambda)
Frontend (S3 based static webpage)
They are linked as dependent in the CDK code:
const app = new cdk.App();
const backStack = new BackendStack(app, 'Stack-back', {...});
new FrontendStack(app, 'Stack-front', {...}).addDependency(backStack, "API URL from backend is needed");
I try to share the URL as follows.
Code from backend stack definition:
const api = new apiGW.RestApi(this, 'MyAPI', {
    restApiName: 'My API',
    description: 'This service provides interface towards web app',
    defaultCorsPreflightOptions: {
        allowOrigins: apiGW.Cors.ALL_ORIGINS,
    }
});
api.root.addMethod("POST", lambdaIntegration);
new CfnOutput(this, 'ApiUrlRef', {
    value: api.url,
    description: 'API Gateway URL',
    exportName: 'ApiUrl',
});
Code from frontend stack definition:
const apiUrl = Fn.importValue('ApiUrl');
Unfortunately, instead of the URL I get a token (${Token[TOKEN.256]}). At the same time, I see the URL is resolved in the CDK generated files:
./cdk.out/Stack-back.template.json:
"ApiUrlRef": {
"Description": "API Gateway URL",
"Value": {
"Fn::Join": [
"",
[
"https://",
{
"Ref": "MyAPI7DAA778AA"
},
".execute-api.us-west-1.",
{
"Ref": "AWS::URLSuffix"
},
"/",
{
"Ref": "MyAPIDeploymentStageprodA7777A7A"
},
"/"
]
]
},
"Export": {
"Name": "ApiUrl"
}
}
},
What am I doing wrong?
UPD:
After following fedonev's advice to pass data as props, the situation did not change much. Now the URL looks like this:
"https://${Token[TOKEN.225]}.execute-api.us-west-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.244]}/"
I think the important part I missed (which was also pointed out by Milan Gatyas) is how I create the HTML with the URL of the gateway.
In my frontend-stack.ts, I use a template file. After the template is filled, I store it in S3:
const filledTemplatePath: string = path.join(processedWebFileDir, 'index.html');
const webTemplate: string = fs.readFileSync(filledTemplatePath, 'utf8')
const Handlebars = require("handlebars")
let template = Handlebars.compile(webTemplate)
const adjustedHtml: string = template({ apiGwEndpoint: apiUrl.toString() })
fs.writeFileSync(filledTemplatePath, adjustedHtml)

// bucket
const bucket: S3.Bucket = new S3.Bucket(this, "WebsiteBucket", {
    bucketName: 'frontend',
    websiteIndexDocument: 'index.html',
    websiteErrorDocument: 'error.html',
    publicReadAccess: true,
})

new S3Deploy.BucketDeployment(this, 'DeployWebsite', {
    sources: [S3Deploy.Source.asset(processedWebFileDir)],
    destinationBucket: bucket,
});
(I'm new to TS and web, please don't judge much :) )
Am I correct that S3 content is generated at synth time, deploy does not change anything, and this is why I get tokens in the HTML?
I will be grateful for a link or explanation so that I can understand the process better; there is so much new information that some parts are still quite foggy.
As @fedonev mentioned, the tokens are just placeholder values in the TypeScript application. The CDK app replaces tokens with intrinsic functions when the CloudFormation template is produced.
However, your use case is different. You are trying to access, inside the CDK app at synthesis time, information that only becomes available at deployment, and you can't use the intrinsic function to resolve the URL from within the CDK app in order to write it to a file.
If possible, you can use a custom domain for the API Gateway. Then you can work with the custom domain, which is known beforehand, in your static file and assign it to the API Gateway in your CDK app.
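A rough sketch of that idea, continuing from the api construct defined in the backend stack (the module path, domain name and certificate ARN below are placeholder assumptions, and you would still need a Route 53 record pointing at the domain):

import * as acm from 'aws-cdk-lib/aws-certificatemanager';

// Hypothetical: attach a custom domain that is known before deployment,
// so the frontend template can simply hardcode https://api.example.com/.
const certificate = acm.Certificate.fromCertificateArn(
    this, 'ApiCert',
    'arn:aws:acm:us-west-1:123456789012:certificate/example-id', // assumption
);

api.addDomainName('CustomDomain', {
    domainName: 'api.example.com', // assumption: a domain you control
    certificate,
});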
[Edit: rewrote the answer to reflect updates to the OP]
Am I correct that S3 content is generated at synth time, deploy does not change anything, and this is why I get tokens in the HTML?
Yes. The API URL will resolve only at deploy-time. You are trying to consume it at synth-time when you write to the template file. At synth-time, CDK represents not-yet-available values as Tokens like ${Token[TOKEN.256]}, the CDK's clever way of handling such deferred values.
What am I doing wrong?
You need to defer the consumption of the API URL until its value is resolved (= until the API is deployed). In most cases, passing constructs as props between stacks is the right approach (see the sketch after this list). But not in your case: you want to inject the URL into the template file. As usual with AWS, you have many options:
Split the stacks into separate apps, deployed separately. Deploy BackendStack. Hardcode the URL into FrontendStack. Quick and dirty.
Instead of S3, use Amplify front-end hosting, which can expose the URL to your template as an environment variable. Beginner friendly, has CDK support.
Add a CustomResource construct, which would be backed by a Lambda that writes the URL to the template file as part of the deploy lifecycle. This solution is elegant but not newbie-friendly.
Use a Pipeline to inject the URL variable as a build step during deploy. Another advanced approach.
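For completeness, the props approach mentioned above could look roughly like this (names are illustrative, and it assumes BackendStack exposes its RestApi as a public api property). Note that the value is still a token at synth time, so it only helps where CloudFormation resolves it, not when writing a local file:

// Hypothetical sketch of passing the URL between stacks as a prop.
interface FrontendStackProps extends cdk.StackProps {
    apiUrl: string; // still a deploy-time token at synth time
}

class FrontendStack extends cdk.Stack {
    constructor(scope: cdk.App, id: string, props: FrontendStackProps) {
        super(scope, id, props);
        // props.apiUrl can be wired into anything CloudFormation resolves at
        // deploy time (e.g. a Lambda environment variable), but writing it to a
        // local HTML file at synth time still produces a token.
    }
}

const app = new cdk.App();
const backStack = new BackendStack(app, 'Stack-back');
new FrontendStack(app, 'Stack-front', { apiUrl: backStack.api.url });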

AppSync DynamoDB Resolvers

I am trying to learn how to use AppSync and its DynamoDB integrations.
I have successfully created an AppSync GraphQL API and linked a resolver to a getter on the primary key, and thought I understood what is happening. However, I cannot get a putItem resolver to work at all and am struggling to find a useful way to debug the logic.
There is a CDK repository here which will deploy the app. Lines 133-145 have a handwritten schema which I thought should work; however, that receives the error:
One or more parameter values were invalid: Type mismatch for key food_name expected: S actual: NULL (Service: DynamoDb, Status Code: 400
I have also attempted to wrap the expressions in quotes but receive errors.
Where should I go from here?
The example data creates a table with keys
food_name
scientific_name
group
sub_group
with food_name as the primary key.
https://github.com/AG-Labs/AppSyncTask
Today I have attempted to reimplement the list resolver as
{
    "version": "2017-02-28",
    "operation": "Scan",
    ## Add 'limit' and 'nextToken' arguments to this field in your schema to implement pagination. **
    "limit": $util.defaultIfNull(${ctx.args.limit}, 20),
    "nextToken": $util.toJson($util.defaultIfNullOrBlank($ctx.args.nextToken, null))
}
with a response mapping of
$util.toJson($ctx.result.items)
In CloudWatch I can see a list of results under log type ResponseMapping (albeit not correctly filtered, but I'll ignore that for now), but these do not get returned to the caller. The result is simply:
{
    "data": {
        "listGenericFoods": {
            "items": null
        }
    }
}
I don't understand where this is going wrong.
The problem was that the resolvers were nested.
Writing a handwritten schema fixed the issue but resulted in a poorer API. I am going back a few steps and will implement from the ground up, slowly adding more resolvers.
The CloudWatch Logs, once turned on, helped somewhat but still required a lot of changing the resolvers ever so slightly and retrying.
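For reference, the CloudWatch logging mentioned above can be turned on from the CDK app itself. A minimal sketch (construct and file names are illustrative; exact prop names can vary between CDK versions):

// Hypothetical: enable per-field resolver logging so RequestMapping and
// ResponseMapping entries show up in CloudWatch Logs.
import * as appsync from 'aws-cdk-lib/aws-appsync';

const api = new appsync.GraphqlApi(this, 'FoodApi', {
    name: 'food-api',
    schema: appsync.SchemaFile.fromAsset('schema.graphql'), // assumption: schema file path
    logConfig: {
        fieldLogLevel: appsync.FieldLogLevel.ALL, // log every resolver invocation
        excludeVerboseContent: false,             // include mapping template output
    },
});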

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. I am now able to read, from the TEST console, my object stored on S3, which is the following (it is a .json file):
[
    {
        "important": "yes",
        "name": "john",
        "type": "male"
    },
    {
        "important": "yes",
        "name": "sarah",
        "type": "female"
    },
    {
        "important": "no",
        "name": "maxim",
        "type": "male"
    }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with a method.request.querystring.type mapping in the Integration Request.
When I test, typing type=male is not taken into account; I still get the 3 elements instead of the 2 male elements.
Any reason you think this is happening?
For information, the resource structure is the following (and I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
  /{folder}
    /{item}
      GET
In case anyone is interested in the answer, I have been able to solve my problem.
The full detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me).
Make sure your SQL query against your S3 data is correct. For this you can go into your S3 bucket, click on your file and select "Query with S3 Select" from the Actions menu.
Make sure that your Lambda function works, so check that you build and pass the correct SQL query from the test event (see the sketch below).
Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like this: "TypeL1": "$input.params('typeL1')"), using the application/json content type.
Good luck!
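To illustrate step 2, here is a rough sketch of such a Lambda using S3 Select with the AWS SDK for JavaScript v3. The bucket, key and default filter are assumptions, and the exact FROM path depends on how the JSON document is structured:

// Hypothetical Lambda: builds an S3 Select query from the "type" query
// string parameter and streams the matching records back.
import { S3Client, SelectObjectContentCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: {
    queryStringParameters?: { type?: string };
}) => {
    const type = event.queryStringParameters?.type ?? "male"; // assumption: default filter

    const command = new SelectObjectContentCommand({
        Bucket: "my-data-bucket",                      // assumption
        Key: "people.json",                            // assumption
        ExpressionType: "SQL",
        // For a top-level JSON array, S3 Select addresses elements via S3Object[*].
        // Note: interpolating user input like this is acceptable for a sketch only.
        Expression: `SELECT * FROM S3Object[*] s WHERE s."type" = '${type}'`,
        InputSerialization: { JSON: { Type: "DOCUMENT" } },
        OutputSerialization: { JSON: {} },
    });

    const response = await s3.send(command);
    let body = "";
    if (response.Payload) {
        // The result arrives as an event stream; concatenate the Records payloads.
        for await (const streamEvent of response.Payload) {
            if (streamEvent.Records?.Payload) {
                body += Buffer.from(streamEvent.Records.Payload).toString("utf8");
            }
        }
    }

    return { statusCode: 200, body };
};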

Invalid patch path /requestTemplates/ for AWS API Gateway Body Mapping

I am following this tutorial to set up an API to DynamoDB through AWS API Gateway.
According to the instructions, I was to add a body mapping template with this code:
{
    "TableName": "favorite_movies",
    "Key": {
        "name": {
            "S": "$input.params('name')"
        }
    }
}
However, I received this error: Invalid patch path /requestTemplates/.
Can anyone help me, please?
Thanks in advance.
The content type needs to be filled in as "application/json", corresponding to the model specified.
Ideally, specifying the model name should be enough, as that carries the content type as well.
I had the same error. Try reloading the page and it should work then.

Serverless framework for AWS : Adding initial data into Dynamodb table

Currently I am using Serverless framework for deploying my applications on AWS.
https://serverless.com/
Using the serverless.yml file, we create the DynamoDB tables which are required for the application. These tables are accessed from the Lambda functions.
When the application is deployed, I want few of these tables to be loaded with the initial set of data.
Is this possible?
Can you provide me some pointers for inserting this initial data?
Is this possible with AWS SAM?
I don't know if there's a specific way to do this in Serverless; however, you can just add a call to the AWS CLI like this to your build pipeline:
aws dynamodb batch-write-item --request-items file://initialdata.json
Where initialdata.json looks something like this:
{
    "Forum": [
        {
            "PutRequest": {
                "Item": {
                    "Name": {"S": "Amazon DynamoDB"},
                    "Category": {"S": "Amazon Web Services"},
                    "Threads": {"N": "2"},
                    "Messages": {"N": "4"},
                    "Views": {"N": "1000"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Name": {"S": "Amazon S3"},
                    "Category": {"S": "Amazon Web Services"}
                }
            }
        }
    ]
}
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SampleData.LoadData.html
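If you would rather run that seeding step from a small script in your pipeline instead of the raw CLI, a roughly equivalent sketch with the AWS SDK for JavaScript v3 might look like this (it assumes the same initialdata.json file in the working directory):

// Hypothetical seeding script, consuming the same initialdata.json shape
// that the CLI command above uses (low-level DynamoDB JSON with S/N types).
import { readFileSync } from "fs";
import { DynamoDBClient, BatchWriteItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

async function seed(): Promise<void> {
    // The file is already in the RequestItems format, so it can be passed as-is.
    const requestItems = JSON.parse(readFileSync("initialdata.json", "utf8"));
    const result = await client.send(
        new BatchWriteItemCommand({ RequestItems: requestItems })
    );
    // BatchWriteItem may return unprocessed items; a real script should retry them.
    console.log("Unprocessed items:", result.UnprocessedItems ?? {});
}

seed().catch((err) => {
    console.error(err);
    process.exit(1);
});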
A more Serverless Framework native option is to use a tool like the serverless-plugin-scripts plugin, which allows you to hook your own CLI commands into the deploy process:
https://github.com/mvila/serverless-plugin-scripts