Invalid patch path /requestTemplates/ for AWS API Gateway Body Mapping - amazon-web-services

I am following this tutorial to set up an API to DynamoDB through AWS API Gateway.
According to the instructions, I was supposed to use this body mapping template:
{
    "TableName": "favorite_movies",
    "Key": {
        "name": {
            "S": "$input.params('name')"
        }
    }
}
However, I received this error: Invalid patch path /requestTemplates/.
Can anyone help me, please?
Thanks in advance.

The content type needs to be filled in as "application/json", which corresponds to the model specified.
Ideally, specifying the model name should be enough, as the model already carries the content type.
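If it helps to see where the content type ends up, here is a minimal sketch using boto3 (the API, resource and role identifiers are placeholders, not taken from the question): the mapping template is stored keyed by its content type, which is presumably why an empty content type produces a patch path of /requestTemplates/ with nothing after it.

import boto3

apigw = boto3.client("apigateway")

# The body mapping template from the question, as a plain string.
mapping_template = """{
    "TableName": "favorite_movies",
    "Key": {
        "name": { "S": "$input.params('name')" }
    }
}"""

apigw.put_integration(
    restApiId="rest-api-id",    # placeholder
    resourceId="resource-id",   # placeholder
    httpMethod="GET",
    type="AWS",
    integrationHttpMethod="POST",
    uri="arn:aws:apigateway:us-east-1:dynamodb:action/GetItem",
    credentials="arn:aws:iam::123456789012:role/apigw-dynamodb-role",  # placeholder
    # The template is keyed by a content type such as application/json;
    # leaving the content type blank is what leads to /requestTemplates/ .
    requestTemplates={"application/json": mapping_template},
)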

I had the same error. Try reloading the page and it should work then.

Related

GCP Data Catalog - Reordering Fields on already created Tag Template

I noticed that I can't reorder the Fields within a Tag Template I have already created on GCP Data Catalog.
I also followed what was said in this discussion on Stack Overflow, but I couldn't successfully update the "order" field, since the documentation of the Data Catalog API states that only these properties can be updated through the PATCH method:
displayName
type.enum_type
isRequired
I have already been successful in changing these properties, but when I try to change the "order", it results in an error, as expected:
{
    "error": {
        "code": 400,
        "message": "Unsupported field mask path: \"order\", supported field masks are:\ndisplay_name\ntype.enum_type\nis_required",
        "status": "INVALID_ARGUMENT"
    }
}
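For reference, the updates that do work were done roughly like this with the Python client (a minimal sketch, assuming the google-cloud-datacatalog library; the project, template and field names are placeholders):

from google.cloud import datacatalog_v1
from google.protobuf import field_mask_pb2

client = datacatalog_v1.DataCatalogClient()

# Placeholder resource name of the field to update.
field_name = (
    "projects/my-project/locations/us-central1/"
    "tagTemplates/my_template/fields/my_field"
)

client.update_tag_template_field(
    name=field_name,
    tag_template_field=datacatalog_v1.TagTemplateField(display_name="New label"),
    # Only display_name, type.enum_type and is_required are accepted here;
    # putting "order" in the mask produces the INVALID_ARGUMENT error above.
    update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
)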
Is there any workaround for this? The referenced discussion says that they managed to do it through the PATCH method on the Data Catalog API, but I can't manage to do it.
Thanks in advance.

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. From the TEST console I am now able to read my object stored on S3, which is the following .json file:
[
    {
        "important": "yes",
        "name": "john",
        "type": "male"
    },
    {
        "important": "yes",
        "name": "sarah",
        "type": "female"
    },
    {
        "important": "no",
        "name": "maxim",
        "type": "male"
    }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with the mapping method.request.querystring.type in the Integration Request.
When I test, typing type=male is not taken into account; I still get all 3 elements instead of the 2 male ones.
Any idea why this is happening?
For information, the resource tree is the following (I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
  /{folder}
    /{item}
      GET
In case anyone is interested in the answer, I have been able to solve my problem.
The full detailed solution would require a tutorial of its own, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me):
1. Make sure your SQL query against your S3 data is correct. For this, you can go to your S3 bucket, click on your file and choose "Query with S3 Select" from the Actions menu.
2. Make sure that your Lambda function works, i.e. check that you build and pass the correct SQL query from the test event (see the sketch below).
3. Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like this: "TypeL1":"$input.params('typeL1')"), using the json content type.
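To illustrate steps 1 and 2, here is a minimal sketch of such a Lambda handler (Python with boto3; the bucket, key and parameter handling are placeholders, not the exact solution):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # With a mapping template like "type": "$input.params('type')",
    # the query string value arrives directly on the event.
    wanted_type = event.get("type", "male")

    # Build and run the same query you validated with "Query with S3 Select".
    # S3Object[*][*] unnests a top-level JSON array; adjust the path (and add
    # input sanitisation) to match your document layout.
    response = s3.select_object_content(
        Bucket="my-bucket",
        Key="people.json",
        ExpressionType="SQL",
        Expression=f"SELECT * FROM S3Object[*][*] s WHERE s.type = '{wanted_type}'",
        InputSerialization={"JSON": {"Type": "DOCUMENT"}},
        OutputSerialization={"JSON": {}},
    )

    # The result comes back as an event stream; collect the record payloads.
    records = []
    for part in response["Payload"]:
        if "Records" in part:
            records.append(part["Records"]["Payload"].decode("utf-8"))
    return "".join(records)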
Good luck!

Create an AWS Resource Group with Terraform

I am currently getting into Terraform and I am trying to structure the different resources that I am deploying by using tags and resource groups.
https://docs.aws.amazon.com/cli/latest/reference/resource-groups/index.html
I can easily add tags with Terraform, and I can create the resource group via the AWS CLI, but I really want to be able to do both with Terraform if possible.
The official Terraform docs currently do not seem to offer an aws_resource_group resource (I was able to find aws_inspector_resource_group and aws_iam_resource_group, which are different kinds of grouping resources), but I was wondering if anyone has been able to achieve it via some kind of workaround.
I would really appreciate any feedback on the matter.
Thanks in advance!
This has been released in aws provider 1.55.0: https://www.terraform.io/docs/providers/aws/r/resourcegroups_group.html
For anyone looking for a code example, try this:
resource "aws_resourcegroups_group" "code-resource" {
name = "code-resource"
resource_query {
query = <<JSON
{
"ResourceTypeFilters": [
"AWS::EC2::Instance"
],
"TagFilters": [
{
"Key": "Stage",
"Values": ["dev"]
}
]
}
JSON
}
}
Please update it to your liking and needs. Also be sure to check out the source documentation:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/resourcegroups_group
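For comparison, the question mentions creating the group via the AWS CLI; the SDK equivalent of the Terraform resource above looks roughly like this (a minimal sketch with boto3; the group name and tag values are just the ones from the example):

import json
import boto3

rg = boto3.client("resource-groups")

# Same resource query as in the Terraform example above.
query = {
    "ResourceTypeFilters": ["AWS::EC2::Instance"],
    "TagFilters": [{"Key": "Stage", "Values": ["dev"]}],
}

rg.create_group(
    Name="code-resource",
    ResourceQuery={
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps(query),
    },
)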

AppSync S3Object retrieval

My files are currently being uploaded to an s3 bucket according to the tutorials provided.
I have a Post type with a file field pointing to an S3Object. S3Object has the values of bucket, key, and region.
I want to allow my users to download their uploaded files, but I cannot access Post > file through a query. This means I cannot get the download URL.
Right now, DynamoDB stores the following for file upon upload (I've changed the values here):
{"s3":{"key":"id.pdf","bucket":"my-bucket","region":"my-region"}}
My resolver for Post > file looks like this:
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.source.id)
    }
}
Response template:
$util.dynamodb.fromS3ObjectJson($ctx.result.file)
When I run the query, I get the following error:
Error: GraphQL error: Unable to convert {bucket=my-bucket, region=my-region, key=id.pdf} to class java.lang.Object.
I believe you need to wrap $util.dynamodb.fromS3ObjectJson($ctx.result.file) in a call to $util.toJson(). Can you kindly change the response mapping template to $util.toJson($util.dynamodb.fromS3ObjectJson($ctx.result.file)) and see if that works?
As a side-note, I think you can achieve the desired effect without making a second call to DynamoDB from the Post.file resolver. Create a "None" datasource and change the Post.file resolver to use it. You can provide a barebones request mapping template such as
{
    "version": "2017-02-28",
    "payload": {}
}
and can then change your response mapping template to use the source instead of the result.
$util.toJson($util.dynamodb.fromS3ObjectJson($ctx.source.file))
Since the post will already have been fetched by the time the Post.file field is resolved, the information will already be available in the source. If you needed to fetch the S3Link from a different table than the Post objects, then you would need a second DynamoDB call.
Hope this helps and let me know if the call to $util.toJson fixes the issue.
Thanks

AWS API Gateway store JSON to DynamoDB

I have an issue trying to use API Gateway as a proxy to DynamoDB.
Basically it works great if I know the structure of the data I want to store, but I cannot manage to make the mapping dynamic so that it works regardless of the payload structure.
There are many websites explaining how to use API Gateway as a proxy to DynamoDB.
None that I found explains how to store a JSON object though.
Basically I send this JSON to my API endpoint:
{
    "entryId": "abc",
    "data": {
        "key1": "123",
        "key2": 123
    }
}
If I map using the following template, the data gets put into my database properly:
{
    "TableName": "Events",
    "Item": {
        "entryId": {
            "S": "abc"
        },
        "data": {
            "M": {
                "key1": {
                    "S": "123"
                },
                "key2": {
                    "N": "123"
                }
            }
        }
    }
}
However, I don't know the structure of "data" in advance, hence why I want the mapping to be dynamic or, even better, to avoid any mapping at all.
I managed to make it dynamic but all my entries are of type String now:
"data": { "M" : {
#foreach($key in $input.path('$.data').keySet())
"$key" : {"S": "$input.path('$.data').get($key)"}#if($foreach.hasNext),#end
#end }
}
Is it possible to get the type dynamically?
I am not quite sure how API Gateway mapping works yet.
Thank you for your help.
Seb
You aren't going to avoid some sort of mapping when inserting into Dynamodb. I would recommend using a Lambda function instead of a service proxy to give you more control and flexibility in mapping the data to your Dynamodb schema.
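To give an idea of what that looks like, here is a minimal sketch of such a Lambda (assuming Python, boto3 and a Lambda proxy integration in front of the Events table; the event shape is an assumption, not part of the question):

import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("Events")

def handler(event, context):
    # DynamoDB does not accept floats, so parse numbers as Decimal.
    item = json.loads(event["body"], parse_float=Decimal)

    # The boto3 resource layer maps Python types (str, int/Decimal, dict,
    # list, bool) to DynamoDB types, so "data" keeps whatever structure and
    # types the client sent without a hand-written mapping template.
    table.put_item(Item=item)

    return {"statusCode": 200, "body": json.dumps({"entryId": item["entryId"]})}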
You can enable CloudWatch logging to verify that the payload after transformation is what you expect. You can also use the test invoke feature in the AWS API Gateway console to find out how your mapping works.
Here is the blog for using Amazon API Gateway as a proxy for DynamoDB. https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/