AppSync S3Object retrieval

My files are currently being uploaded to an S3 bucket according to the tutorials provided.
I have a Post type with a file field pointing to an S3Object. S3Object has the values of bucket, key, and region.
I want to allow my users to download their uploaded files, but I cannot access Post > file through a query. This means I cannot get the download URL.
Right now, DynamoDB stores the following for file upon upload (I've changed the values here):
{"s3":{"key":"id.pdf","bucket":"my-bucket","region":"my-region"}}
My resolver for Post > file looks like this:
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.source.id)
    }
}
Response template:
$util.dynamodb.fromS3ObjectJson($ctx.result.file)
When I run the query, I get the following error:
Error: GraphQL error: Unable to convert {bucket=my-bucket, region=my-region, key=id.pdf} to class java.lang.Object.

I believe you need to wrap $util.dynamodb.fromS3ObjectJson($ctx.result.file) in a call to $util.toJson(). Can you kindly change the response mapping template to $util.toJson($util.dynamodb.fromS3ObjectJson($ctx.result.file)) and see if that works?
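For context, here is roughly what that conversion does, given the stored value shown above (a sketch of the expected shapes, not verified output):
Stored in DynamoDB:
{"s3":{"key":"id.pdf","bucket":"my-bucket","region":"my-region"}}
After $util.toJson($util.dynamodb.fromS3ObjectJson(...)):
{"key":"id.pdf","bucket":"my-bucket","region":"my-region"}
The outer $util.toJson is needed because fromS3ObjectJson returns an object, and the response mapping template has to emit a JSON string.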
As a side note, I think you can achieve the desired effect without making a second call to DynamoDB from the Post.file resolver. Create a "None" data source and change the Post.file resolver to use it. You can provide a barebones request mapping template such as
{
    "version": "2017-02-28",
    "payload": {}
}
and can then change your response mapping template to use the source instead of the result.
$util.toJson($util.dynamodb.fromS3ObjectJson($ctx.source.file))
Since the post will already have been fetched by the time the Post.file field is resolved, the information will already be available in the source. If you needed to fetch the S3Link from a different table than the Post objects, then you would need a second DynamoDB call.
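For illustration, assuming a getPost(id: ID!): Post query field in your schema, a request like the following should then resolve file from the parent item without the extra round trip:
query {
    getPost(id: "some-post-id") {
        id
        file {
            bucket
            key
            region
        }
    }
}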
Hope this helps and let me know if the call to $util.toJson fixes the issue.
Thanks

Related

AppSync wrong id for schema in DynamoDB

I am using a GraphQL API with AppSync that receives post requests from a Lambda function that is triggered by AWS IoT, with sensor data in the following JSON format:
{
    "scoredata": {
        "id": "240",
        "distance": 124,
        "timestamp": "09:21:11",
        "Date": "04/16/2022"
    }
}
The Lambda function uses this JSON object to perform a post request on the GraphQL API, and AppSync puts this data in DynamoDB to be stored. My issue is that whenever I parse the JSON object within my Lambda function to retrieve the id value, it does not match the id value stored in DynamoDB; AppSync is seemingly generating an id automatically.
Here is a screenshot of the request made to the GraphQL API from CloudWatch:
Here is what DynamoDB is storing:
I would like to know why the id in DynamoDB is shown as "964a3cb2-1d3d-4f1e-a94a-9e4640372963" when the post request id value is "240", and if there is anything I can do to fix this.
I can't tell for certain, but I'm guessing that the DynamoDB schema is autogenerating the id field on insert and using a UUID as the id type. An alternative would be to introduce a new property like score_id to store this extraneous id.
If you are using Amplify, the request mapping templates it generates most likely identify the "id" field as a unique identifier to be generated at runtime.
I recommend taking a look at your VTL request template; you will most likely find something like this:
$util.qr($context.args.input.put("id", $util.defaultIfNull($ctx.args.input.id, $util.autoId())))
The self-generated id surely comes from $util.autoId().
Possibly some older version of Amplify omitted the $util.defaultIfNull($ctx.args.input.id, ...) check and always overwrote the id by self-generating it.
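One way to rule this out is to pass the id explicitly in the mutation the Lambda function sends; with the $util.defaultIfNull guard in place, AppSync should keep the supplied value instead of autogenerating one. A sketch, assuming an Amplify-style createScoredata mutation (the mutation and field names here are guesses based on the JSON above):
mutation {
    createScoredata(input: {
        id: "240",
        distance: 124,
        timestamp: "09:21:11",
        Date: "04/16/2022"
    }) {
        id
    }
}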

GCP Data Catalog - Reordering Fields on already created Tag Template

I noticed that I can't reorder the Fields within a Tag Template I have already created on GCP Data Catalog.
I also followed what was said in this discussion on Stack Overflow, but I couldn't successfully update the "order" field, as the documentation of the Data Catalog API states that only these properties can be updated through the PATCH method:
displayName
type.enum_type
isRequired
I have already been successful in changing these properties, but when I try to change the "order", it results in an error, as expected:
{
    "error": {
        "code": 400,
        "message": "Unsupported field mask path: \"order\", supported field masks are:\ndisplay_name\ntype.enum_type\nis_required",
        "status": "INVALID_ARGUMENT"
    }
}
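For reference, a call that does succeed for the supported masks looks something like this (a sketch against the v1 tagTemplates.fields.patch endpoint; the project, location, template, and field names are placeholders):
PATCH https://datacatalog.googleapis.com/v1/projects/my-project/locations/us-central1/tagTemplates/my_template/fields/my_field?updateMask=displayName
{
    "displayName": "My New Display Name"
}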
Is there any workaround for this? The referenced discussion says that they managed to do it through the PATCH method on the Data Catalog API, but I can't manage to do it.
Thanks in advance.

AppSync Dynamodb Resolvers

I am trying to learn how to use AppSync and its DynamoDB integrations.
I have successfully created an AppSync GraphQL API and linked a resolver to a getter on the primary key, and thought I understood what was happening. However, I cannot get a putItem resolver to work at all and am struggling to find a useful way to debug the logic.
There is a CDK repository here which will deploy the app. Lines 133-145 have a hand-written schema which I thought should work; however, it receives the error
One or more parameter values were invalid: Type mismatch for key food_name expected: S actual: NULL (Service: DynamoDb, Status Code: 400
I have also attempted to wrap the expressions in quotes but receive errors.
Where should I go from here?
The example data creates a table with keys
food_name
scientific_name
group
sub_group
with food_name as the primary key.
https://github.com/AG-Labs/AppSyncTask
Today I have attempted to reimplement the list resolver as
{
    "version" : "2017-02-28",
    "operation" : "Scan",
    ## Add 'limit' and 'nextToken' arguments to this field in your schema to implement pagination.
    "limit": $util.defaultIfNull(${ctx.args.limit}, 20),
    "nextToken": $util.toJson($util.defaultIfNullOrBlank($ctx.args.nextToken, null))
}
with a response mapping of
$util.toJson($ctx.result.items)
In CloudWatch I can see a list of results under log type ResponseMapping (albeit not correctly filtered, but I'll ignore that for now), but these do not get returned to the querier. The result is simply
{
    "data": {
        "listGenericFoods": {
            "items": null
        }
    }
}
I don't understand where this is going wrong.
The problem was that the resolvers were nested.
Writing a hand-written schema fixed the issue but resulted in a poorer API. I'm going back a few steps and will implement from the ground up, slowly adding more resolvers.
The CloudWatch Logs, once turned on, helped somewhat, but still required a lot of changing the resolvers ever so slightly and retrying.
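For anyone hitting the same food_name type mismatch: the key in a PutItem request mapping template has to be populated explicitly from the arguments, otherwise DynamoDB receives NULL for the partition key. A minimal sketch, assuming the mutation takes food_name and the other attributes as top-level arguments:
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "food_name": $util.dynamodb.toDynamoDBJson($ctx.args.food_name)
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args)
}
On the list side, if listGenericFoods returns a wrapper type with items and nextToken fields, the Scan response mapping should be $util.toJson($ctx.result) rather than $util.toJson($ctx.result.items); returning the bare list leaves nothing for the items selection to resolve, which would explain the null above.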

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. I am now able to read, from the TEST console, my AWS object stored on S3, which is the following (it is a .json file):
[
    {
        "important": "yes",
        "name": "john",
        "type": "male"
    },
    {
        "important": "yes",
        "name": "sarah",
        "type": "female"
    },
    {
        "important": "no",
        "name": "maxim",
        "type": "male"
    }
]
Now, what I am trying to achieve is pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with method.request.querystring.type mapping in the Integration Request.
When I test, typing type=male is not taken into account; I still get all 3 elements instead of the 2 male elements.
Any reason you think this is happening?
For information, the resource structure is the following (I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
  /{folder}
    /{item}
      GET
In case anyone is interested in the answer, I have been able to solve my problem.
The full detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me).
Make sure your SQL query against your S3 data is correct. For this you can go into your S3 bucket, click on your file, and select "Query with S3 Select" from the Actions menu.
Make sure that your Lambda function works: check that you build and pass the correct SQL query from the test event.
Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like this: "TypeL1":"$input.params('typeL1')"), using the application/json content type; see the sketch below.
Good luck!
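For example, the Integration Request mapping template that forwards the query string to the Lambda event could look roughly like this (a sketch; the key name is whatever your function reads):
{
    "type": "$input.params('type')"
}
The Lambda function can then interpolate that value into its S3 Select query, e.g. something like SELECT * FROM S3Object[*][*] s WHERE s.type = 'male' (the exact FROM path depends on your JSON structure).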

How to download publicly available pdf and png files from S3 with AppSync

I'm fairly new to GraphQL and AWS AppSync, and I'm running into an issue downloading files (PDFs and PNGs) from a public S3 bucket via AWS AppSync. I've looked at dozens of tutorials and dug through a mountain of documentation, and I'm just not certain what's going on at this point. This may be nothing more than a misunderstanding about the nature of GraphQL or AppSync functionality, but I'm completely stumped.
For reference, I've heavily sourced from other posts like How to upload file to AWS S3 using AWS AppSync (specifically, from the suggestions by the accepted answer author), but none of the solutions (or the variations I've attempted) are working.
The Facts
S3 bucket is publicly accessible – i.e., included folders and files are not tied to individual users with Cognito credentials
Files are uploaded to S3 outside of AppSync (so there's no GraphQL mutation); it's a manual file upload
Schema works for all other queries and mutations
We are using AWS Cognito to authenticate users and queries
Abridged Schema and DynamoDB Items
Here's an abridged version of the relevant GraphQL schema types:
type MetroCard implements TripCard {
    id: ID!
    cardType: String!
    resIds: String!
    data: MetroData!
    file: S3Object
}

type MetroData implements DataType {
    sourceURL: String!
    sourceFileURL: String
    metroName: String!
}

type S3Object {
    bucket: String!
    region: String!
    key: String!
}
Metadata about the files is stored in DynamoDB and looks something like this:
{
    "data": {
        "metroName": "São Paulo Metro",
        "sourceFileURL": "http://www.metro.sp.gov.br/pdf/mapa-da-rede-metro.pdf",
        "sourceURL": "http://www.metro.sp.gov.br/en/your-trip/index.aspx"
    },
    "file": {
        "bucket": "test-images",
        "key": "some_folder/sub_folder/bra-sbgr-metro-map.pdf",
        "region": "us-east-1"
    },
    "id": "info/en/bra/sbgr/metro"
}
VTL Request/Response Resolvers
For our getMetroCard(id: ID!): MetroCard query, the mapping templates are pretty vanilla. The request template is a standard query on a DynamoDB table. The response template is a basic $util.toJson($ctx.result).
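For reference, that request template is essentially the following (a sketch, assuming the id argument maps straight to the table's primary key):
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}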
For the field-level resolver on MetroCard.file, we've attached a local data source with an empty {} payload for the request and the following for the response (see referenced link for reasoning):
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
(We've played with this bit in a couple of ways, including simply returning $context.result, but no change.)
Results
All of the query fields resolve appropriately; however, the file field always returns null no matter what the field-level resolver is mapped to. Interestingly, I've noticed in the CloudWatch logs that the value of context.result does change from null to {} with the above mapping template.
Questions
Given the above, I have several questions:
Does AppSync file download require files to be uploaded to S3 with user credentials through a mutation with a complex object handler in order to make them retrievable?
What should a successful response look like in the AppSync console – i.e., since I have no client implementation (like a React Native app), how do I verify successful file downloads? More directly, is it actually retrieving the files, and I just don't know it? (Note: I actually have tested it briefly with a React Native client, but nothing rendered, so I've just been using the AppSync console returns as direction ever since.)
Does it make more sense to remove the file download process entirely from our schema? (I'm assuming the answers I need reveal that AppSync just wasn't built for file transfer like this, and so we'll need to rethink our approach.)
Update
I've started playing around with the data source for MetroCard.file per the suggestion of this recent post: https://stackoverflow.com/a/52142178/5989171. If I make the data source the same as the database storing the file metadata, I now get the error mentioned in that post, but its solution doesn't seem to be working for me. Specifically, I now get the following:
"message": "Value for field '$[operation]' not found."
Our Solution
For our use case, we've decided to go ahead and use the AWS Amplify Storage module as suggested here: https://twitter.com/presbaw/status/1040800650790002689. Despite that, I'm keeping this question open and unanswered, because I'm just genuinely curious about what I'm not understanding here, and I have a feeling I'm not the only one!
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
You can only use this if your DynamoDB table saves the file field in the format: {"s3":{"key":"file.jpg","bucket":"bucket_name/folder","region":"us-east-1"}}
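If you control the write path, AppSync's $util.dynamodb.toS3ObjectJson helper produces exactly that format. A sketch of a PutItem request mapping template using it (the argument names are assumptions):
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    },
    "attributeValues": {
        "file": $util.dynamodb.toS3ObjectJson($ctx.args.file.key, $ctx.args.file.bucket, $ctx.args.file.region)
    }
}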