AWS AppSync - add multiple items

Is there any way to connect AppSync to DynamoDB with the option to put multiple items at once? I'm asking about the DynamoDB request mapping template.
Let's say we have a saveVotes mutation:
type Mutation {
    saveVotes(votes: [VoteInput]): [Vote]!
}
How should I design the DynamoDB request mapping template so that each vote gets saved as a separate object in DynamoDB? Each VoteInput has an ID. If I send 5 VoteInputs, I want to end up with 5 separate objects, each with its own ID, in DynamoDB.
The AWS docs only show examples for a single PutItem, which is not enough for me:
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html#aws-appsync-resolver-mapping-template-reference-dynamodb-putitem

You should take a look at using Batch Resolvers at https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
As I understand it, you can construct your GraphQL queries to accept a collection of items and then implement the associated resolvers to perform batch operations in the way you describe.
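For reference, a request mapping template for a BatchPutItem resolver on saveVotes could look roughly like the sketch below, following the pattern from that tutorial; the table name VotesTable is a placeholder, and it assumes each VoteInput already carries its own id:

## collect the incoming VoteInput objects as DynamoDB attribute-value maps
#set($votesData = [])
#foreach($vote in ${ctx.args.votes})
    $util.qr($votesData.add($util.dynamodb.toMapValues($vote)))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        "VotesTable": $util.toJson($votesData)
    }
}

The matching response mapping template would then return the written items, e.g. $util.toJson($ctx.result.data.VotesTable). Note that the data source's IAM role needs write access to the table named in the tables map.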

So the best solution for me was to build the mutation this way:
mutation saveVotes {
    vote1: saveVotes(author: "AUTHORNAME", value: 1) { author, value }
    vote2: saveVotes(author: "AUTHORNAME", value: 3) { author, value }
}

Related

AppSync wrong id for schema in DynamoDB

I am using a GraphQL API with AppSync that receives POST requests from a Lambda function triggered by AWS IoT, with sensor data in the following JSON format:
{
    "scoredata": {
        "id": "240",
        "distance": 124,
        "timestamp": "09:21:11",
        "Date": "04/16/2022"
    }
}
The Lambda function uses this JSON object to perform a POST request on the GraphQL API, and AppSync stores this data in DynamoDB. My issue is that whenever I parse the JSON object within my Lambda function to retrieve the id value, it does not match the id value stored in DynamoDB; AppSync is seemingly generating an id automatically.
(Screenshots omitted: the request to the GraphQL API as logged in CloudWatch, and the item as stored in DynamoDB.)
I would like to know why the id in DynamoDB is shown as "964a3cb2-1d3d-4f1e-a94a-9e4640372963" when the id value in the POST request is "240", and whether there is anything I can do to fix this.
I can't tell for certain, but I'm guessing that the DynamoDB schema is auto-generating the id field on insert and using a UUID as the id type. An alternative would be to introduce a new property like score_id to store this extra id.
If you are using Amplify, the auto-generated request mapping templates most likely treat the "id" field as a unique identifier to be generated at runtime.
I recommend taking a look at your VTL request template; you will most likely find something like this:
$util.qr($context.args.input.put("id", $util.defaultIfNull($ctx.args.input.id, $util.autoId())))
The auto-generated id surely comes from $util.autoId().
Some older versions of Amplify may have omitted the $util.defaultIfNull($ctx.args.input.id, ... check and always overwritten the id with an auto-generated value.
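Given that template, explicitly passing the sensor's own id in the mutation input should keep $util.defaultIfNull from falling back to $util.autoId(). A minimal sketch, assuming hypothetical mutation and field names (substitute whatever your generated schema actually uses):

mutation SaveScore {
    createScoredata(input: {
        id: "240"          # explicit id, so the template should keep it instead of generating one
        distance: 124
        timestamp: "09:21:11"
        Date: "04/16/2022"
    }) {
        id
    }
}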

How to delete all items that satisfy a certain condition in AWS Amplify?

I am developing a web app using AWS Amplify.
I want to delete multiple items that satisfy a certain condition using a query like the following:
mutation delete {
    deletePostTag(condition: {title: {eq: "Hello"}}) {
        id
    }
}
However, when I try to run the above query in the AWS AppSync console, it complains that the input field is missing, and unfortunately input only accepts an id.
It seems that the resolver generated by the Amplify CLI does not support deleting multiple items at once.
Do I have to implement a custom resolver?
You can delete multiple items in a batch. There's an example below; the AppSync docs on batch resolvers cover it in more detail.
Schema:
type Mutation {
    batchDelete(ids: [ID]): [Post]
}
Query:
mutation delete {
    batchDelete(ids: [1, 2]) { id }
}
I'm not 100% sure whether conditions are supported here, but hopefully you can test it. If, as I suspect, they are not, simply issue a query with those same conditions to retrieve the matching items, and then supply the resulting array of item keys to batchDelete.
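If you end up wiring the resolver by hand, the batchDelete request mapping template could look roughly like the sketch below (the table name Posts is a placeholder; BatchDeleteItem only accepts primary keys, which is why a condition doesn't fit naturally here):

## build the list of primary keys to delete
#set($keys = [])
#foreach($id in ${ctx.args.ids})
    #set($key = {})
    $util.qr($key.put("id", $util.dynamodb.toString($id)))
    $util.qr($keys.add($key))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "Posts": $util.toJson($keys)
    }
}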

Return all items or a single item from DynamoDB from AWS API Gateway endpoint

I am using AWS proxy integration with AWS API Gateway to interact with a DynamoDB table. I have an API resource, under which I have a GET method configured to use the Scan action (configuration screenshot omitted) to fetch all the items from the DynamoDB table. I also have the following request integration mapping template:
{
    "TableName": tableName
}
It's really simple. But my problem is that I would like to add another GET method to fetch each item by its id, supplied in the URL as a path parameter. However, since I have already set up one GET method, I am not able to set up another to fetch only a single item. I am aware I can use mapping templates and Scan as shown in the docs to conditionally fetch items if a param is given, but that would mean scanning the entire table, which is wasteful every time I want to fetch a single item.
Is there any other way to do this?
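One route, sketched here under assumptions, is to add the single-item GET on a child resource such as /items/{id} and point its integration at the DynamoDB GetItem action instead of Scan. Assuming the table's partition key is a string attribute named id, and the path parameter is also named id, the request integration mapping template could look roughly like this (YourTableName is a placeholder):

{
    "TableName": "YourTableName",
    "Key": {
        "id": {
            "S": "$input.params('id')"
        }
    }
}

Because the new method lives on its own resource path, it doesn't conflict with the existing GET that performs the Scan.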

Is there any way to know whether data saved offline has already been synced online?

I'm using AWS AppSync with React Native. Transactions happen while offline, and I want to know whether the data written offline has already been saved to my online database.
The fetch policy I'm using is already network-only, but the "network-only" policy is not working as expected because the app can still read the data while offline.
If you are using DynamoDB with AppSync, you can add a condition expression to your mutation resolver request mapping template. DynamoDB conditions are used to validate whether the mutation should succeed or not.
Many people use version numbers with DynamoDB condition checks to validate that a record hasn't already been updated, but you can add additional fields to keep track of whether the transaction has already been applied.
Here is an example condition expression that you can add to your request mapping template to validate the incoming mutation:
"condition" : {
"expression" : "version = :expectedVersion",
"expressionValues" : {
":expectedVersion" : { "N" : ${context.arguments.expectedVersion} }
}
}
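In context, an UpdateItem request mapping template carrying that condition might look roughly like the following; the key name id, the content attribute, and the version counter are assumptions borrowed from the AWS tutorial pattern rather than from your actual schema:

{
    "version" : "2017-02-28",
    "operation" : "UpdateItem",
    "key" : {
        "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
    },
    "update" : {
        "expression" : "SET content = :content ADD version :one",
        "expressionValues" : {
            ":content" : $util.dynamodb.toDynamoDBJson($ctx.args.content),
            ":one" : { "N" : 1 }
        }
    },
    "condition" : {
        "expression" : "version = :expectedVersion",
        "expressionValues" : {
            ":expectedVersion" : { "N" : ${context.arguments.expectedVersion} }
        }
    }
}

If the condition fails, the mutation is rejected with a conditional check error, which tells the client that the online record is not in the state it expected.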
Here is a very comprehensive guide to using DynamoDB resolvers:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-resolvers.html#modifying-the-updatepost-resolver-dynamodb-updateitem

Is this an appropriate use-case for Amazon DynamoDB / NoSQL?

I'm working on a web application that uses a bunch of Amazon Web Services. I'd like to use DynamoDB for a particular part of the application but I'm not sure if it's an appropriate use-case.
When a registered user on the site performs a "job", an entry is recorded and stored for that job. The job has a bunch of details associated with it, but the most relevant thing is that each job has a unique identifier and an associated username. Usernames are unique too, but there can of course be multiple job entries for the same user, each with different job identifiers.
The only query that I need to perform on this data is: give me all the job entries (and their associated details) for username X.
I started to create a DynamoDB table but I'm not sure if it's right. My understanding is that the chosen hash key should be the key that's used for querying/indexing into the table, but it should be unique per item/row. Username is what I want to query by, but username will not be unique per item/row.
If I make the job identifier the primary hash key and the username a secondary index, will that work? Can I have duplicate values for a secondary index? But that means I will never use the primary hash key for querying/indexing into the table, which is the whole point of it, isn't it?
Is there something I'm missing, or is this just not a good fit for NoSQL?
Edit:
The accepted answer, as well as this question, helped me find what I was looking for.
I'm not totally clear on what you're asking, but I'll give it a shot...
With DynamoDB, the combination of your hash key and range key must uniquely identify an item. Range key is optional; without it, hash key alone must uniquely identify an item.
You can also store a list of values (rather than just a single value) as an item's attributes. If, for example, each item represented a user, an attribute on that item could be a list of that user's job entries.
If you're concerned about hitting the size limitation of DynamoDB records, you can use S3 as backing storage for that list - essentially use the DDB item to store a reference to the S3 resource containing the complete list for a given user. This gives you flexibility to query for or store other attributes rather easily. Alternatively (as you suggested in your answer), you could put the entire user's record in S3, but you'd lose some of the flexibility and throughput of doing your querying/updating through DDB.
Perhaps a "Jobs" table would work better for you than a "User" table. Here's what I mean.
If you're worried about all of those jobs inside a user document adding up to more than the 400 KB limit, why not store the jobs individually in a table like:
my_jobs_table:
{
    {
        Username: toby,
        JobId: 1234,
        Status: Active,
        CreationDate: 2014-10-05,
        FileRef: some-reference1
    },
    {
        Username: toby,
        JobId: 5678,
        Status: Closed,
        CreationDate: 2014-10-01,
        FileRef: some-reference2
    },
    {
        Username: bob,
        JobId: 1111,
        Status: Closed,
        CreationDate: 2014-09-01,
        FileRef: some-reference3
    }
}
Username is the hash key and JobId is the range key. You can query on the Username to get all of a user's jobs.
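A low-level DynamoDB Query request for that layout might look roughly like this (the table name and values are illustrative; the same request shape works with the AWS CLI's --cli-input-json):

{
    "TableName": "my_jobs_table",
    "KeyConditionExpression": "Username = :u",
    "ExpressionAttributeValues": {
        ":u": { "S": "toby" }
    }
}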
Now that the size of each document is more limited, you could think about putting all the data for each job in the DynamoDB record instead of using the FileRef and looking it up in S3. This would probably save a significant amount of latency.
Each record might then look like:
{
    Username: bob,
    JobId: 1111,
    Status: Closed,
    CreationDate: 2014-09-01,
    JobCategory: housework,
    JobDescription: Doing the dishes,
    EstimatedDifficulty: Extreme,
    EstimatedDuration: 9001
}
I reckon I didn't really play with the DynamoDB console for long enough to get a good understanding before posting this question. I only just now understood that a DynamoDB table (and presumably any other NoSQL table) is really just a giant dictionary/hash data structure. So to answer my question: yes, I can use DynamoDB, and each item/row would look something like this:
{
    "Username": "SomeUser",
    "Jobs": {
        "gdjk345nj34j3nj378jh4": {
            "Status": "Active",
            "CreationDate": "2014-10-05",
            "FileRef": "some-reference"
        },
        "ghj3j76k8bg3vb44h6l22": {
            "Status": "Closed",
            "CreationDate": "2014-09-14",
            "FileRef": "another-reference"
        }
    }
}
But I'm not sure it's even worth using DynamoDB after all that. It might be simpler to just store a JSON file containing the content structure above in an S3 bucket, where the filename is the username with a .json extension.
Edit:
For what it's worth, I just realized that DynamoDB has a 400 KB size limit on items. That's a huge amount of data, relatively speaking, for my use case, but I can't take the chance, so I'll have to go with S3.
It seems that username as the hash key and a unique job_id as the range key, as others have already suggested, would serve you well in DynamoDB. Using a Query, you can quickly find all records for a username.
Another option is to take advantage of local secondary indexes and sparse indexes. It seems that there is a status column, but based on what I've read you could add another attribute, perhaps 'not_processed': 'x', and create a local secondary index on username + not_processed. Only records that have this attribute are indexed, and once a job is complete you delete the attribute. This means you can effectively scan, via the index, for a username where not_processed = x, and your index will stay small.
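A Query against such a sparse local secondary index might look roughly like this; the index name and values are hypothetical:

{
    "TableName": "my_jobs_table",
    "IndexName": "username-not_processed-index",
    "KeyConditionExpression": "Username = :u AND not_processed = :x",
    "ExpressionAttributeValues": {
        ":u": { "S": "toby" },
        ":x": { "S": "x" }
    }
}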
All my relational DB experience seems to be getting in the way of my understanding of DynamoDB. Good luck!