I'd like to autofill a "userId" field when an object of this type is created.
type FriendRequest @model
  @key(fields: ["userId", "receiver"])
{
  userId: ID!
  receiver: ID!
}
How would this be done? Would I need a @function directive on the userId field? What would that function look like?
Found the answer. Find the Velocity (VTL) resolvers in amplify\backend\api\<your api name>\build\resolvers and move the mutation templates you want to edit into amplify\backend\api\<your api name>\resolvers.
To autofill a field I used this line, which is based on the line of code that autofills the "createdAt" and "updatedAt" fields for AppSync API objects:
$util.qr($context.args.input.put("yourFieldName", $util.defaultIfNull($ctx.args.input.yourFieldName, "your value here")))
In my case I used $context.identity.sub to get the user's ID, which I learned from this doc.
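Putting it together for the schema above, here is a minimal sketch of the edited create-mutation request template (Amplify generates it as something like Mutation.createFriendRequest.req.vtl in build\resolvers; the line goes before the template builds the DynamoDB operation):
## Autofill userId with the caller's Cognito sub when the client omits it
$util.qr($context.args.input.put("userId", $util.defaultIfNull($ctx.args.input.userId, $context.identity.sub)))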
Related
I have a multi-tenant application in AWS Amplify, using the custom-attribute-based multi-tenancy described here.
All models have a composite key with "company" being the unique tenant ID, and the Cognito user pool has a custom attribute custom:company which links the user to the tenant data.
Example type below:
type Customer @model
  @key(fields: ["company", "id"])
  @auth(rules: [
    { allow: owner, ownerField: "company", identityClaim: "custom:company" },
    { allow: groups, groups: ["Member"], operations: [read] },
    { allow: groups, groups: ["Admin"] },
  ])
{
  company: ID!
  id: ID!
  ...
}
I want to add user groups to Cognito to manage the operations that different users can perform - e.g. Admin users can perform all operations, but Member users can only read.
The problem is that the first owner auth rule will match anyone with the matching custom:company attribute, regardless of their group.
Is there a way to combine owner and group @auth rules - i.e. both the owner rule and the group rule need to pass for access to an item?
For example - users of the Member group are allowed, but only when their custom:company attribute matches the company of the model.
Another example - anyone with a matching custom:company attribute has access to an item, but Members can only read.
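For illustration, the combined check could be expressed in a custom VTL resolver along these lines (a sketch only, not built-in @auth behavior; it assumes the operation receives the company key as an argument, as in the schema above):
#set($claims = $context.identity.claims)
#set($isMember = false)
#foreach($group in $claims.get("cognito:groups"))
  #if($group == "Member")
    #set($isMember = true)
  #end
#end
## Require BOTH: membership in the Member group AND a matching tenant claim
#if($isMember && $claims.get("custom:company") == $ctx.args.company)
  ## ... build the (read-only) DynamoDB operation here ...
#else
  $util.unauthorized()
#end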
I am planning to use AWS Amplify as the backend for a mobile application. The app consists of two user types (UserTypeA, UserTypeB). They have some common data points and some unique ones too.
UserTypeA(id, email, firstName, lastName, profilePicture, someUniquePropertyForUserTypeA)
UserTypeB(id, email, firstName, lastName, profilePicture, someUniquePropertyForUserTypeB)
What would be a scalable approach to achieve this? I am also using AWS Amplify authentication, so I can save the common data as custom attributes offered by Cognito, but then how would I save the unique properties for the two user types? Will this approach scale?
This is a social app and is heavily reliant on other users' profile data as well (which will be queried most of the time).
Check out the patterns recommended by AppSync (the GraphQL service behind Amplify when you add a GraphQL API). They are described in detail here: https://docs.aws.amazon.com/appsync/latest/devguide/security-authorization-use-cases.html
The main idea is to define groups in your Cognito user pool and then check those groups in the resolvers. For example:
## This checks whether the user is part of the Admin group and, if so, makes the call
#foreach($group in $context.identity.claims.get("cognito:groups"))
  #if($group == "Admin")
    #set($inCognitoGroup = true)
  #end
#end
#if($inCognitoGroup)
{
  "version" : "2017-02-28",
  "operation" : "UpdateItem",
  "key" : {
    "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
  },
  "attributeValues" : {
    "owner" : $util.dynamodb.toDynamoDBJson($context.identity.username)
    #foreach( $entry in $context.arguments.entrySet() )
      ,"${entry.key}" : $util.dynamodb.toDynamoDBJson($entry.value)
    #end
  }
}
#else
  $utils.unauthorized()
#end
or using @ directives on the GraphQL schema, such as:
type Query {
  posts: [Post!]!
    @aws_auth(cognito_groups: ["Bloggers", "Readers"])
}
type Mutation {
  addPost(id: ID!, title: String!): Post!
    @aws_auth(cognito_groups: ["Bloggers"])
}
...
This is a database design problem. To solve it, you can create a relation that holds only the common attributes: User(ID, email, firstName, lastName, profilePicture).
After that, create sub-class relations UserTypeA and UserTypeB.
These relations each get a unique ID and a foreign key relation to the parent (User): 'User' has an attribute 'ID', so the two sub-classes have an attribute 'User_ID', which is a foreign key reference to 'User'.'ID'.
Now just auto-generate another ID column for UserTypeA and UserTypeB.
This way, you have a central table with a unique ID for all users, and a unique ID in each of the sub-class relations, which together with User_ID forms a composite key.
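Translated into Amplify's schema language, a minimal sketch of that layout could look like this (type and field names are illustrative):
type User @model {
  id: ID!
  email: String!
  firstName: String!
  lastName: String!
  profilePicture: String
}

type UserTypeA @model {
  id: ID!
  userID: ID! # foreign key to User.id
  someUniquePropertyForUserTypeA: String
}

type UserTypeB @model {
  id: ID!
  userID: ID! # foreign key to User.id
  someUniquePropertyForUserTypeB: String
}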
I've currently been handling batch operations with a for loop, but obviously, this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will take 1000+ putItems.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following the steps mentioned there, I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  company: String
  phone: String
  email: String
}

type Mutation {
  batchDelete(ids: [ID]): [Client]
}
I then created two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
  $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Clients": $utils.toJson($clientsdata)
  }
}
and then, as per the tutorial, a "simple pass-through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, when I run the batchDelete mutation, I keep getting nothing returned.
I would really appreciate guidance on this!
When performing DynamoDB batch operations with Amplify, note that the table name is actually different per environment: your "Client" table isn't recognized as "Clients" as stated in the request mapping template, but rather by the name it is given on Amplify push, per environment, e.g. Client-<some alphanumeric number>-envName.
Add the full name of the table to your request and response mapping templates.
Also, your #foreach statement should iterate over the argument actually defined in your schema (ids):
#foreach($item in ${ctx.args.ids})
so that you iterate through each of the items in the array passed as an argument on the context object.
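Putting the fixes together, a sketch of a corrected request mapping template (the table name Client-abc123xyz-dev is illustrative - substitute the name Amplify actually generated for your environment; note also that keys must be appended to the list with add, not delete):
#set($clientsdata = [])
#foreach($id in ${ctx.args.ids})
  ## Build a DynamoDB key map for each id and append it to the list
  $util.qr($clientsdata.add($util.dynamodb.toMapValues({ "id": $id })))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Client-abc123xyz-dev": $util.toJson($clientsdata)
  }
}
The response mapping template then reads the result under that same key, e.g. $util.toJson($ctx.result.data.get("Client-abc123xyz-dev")).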
Hope this helps.
I need to query my users' favorite posts. With this query they should be able to see the post with its information, like title and likes. Considering that the value of some attributes, like total likes, can change over time, I can't copy these values into the favorite dataset; if I did, I would have to update every favorite dataset whenever someone likes a post to keep them up to date. This led me to the decision not to add this information (for example, total likes) to a favorite entry, and instead attach an additional resolver for the post attribute. It could look like this:
type Post {
  pid: ID!
  title: String!
  content: String!
  likes: Int # <-- an attribute I need to show while displaying favorites
}

type Favorite {
  fid: ID!
  pid: String!
  ...
  post: Post! # <-- this has its own resolver
}
The post attribute has its own additional resolver attached through AWS AppSync, which could look like this:
{
  "version": "2018-05-29",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.source.pid)
  }
}
If a user queried his own favorites, would this cost twice the read operations, since it needs two GetItem operations for each item in the result?
Example:
Querying favorites with a limit of 10 would cost 20 read ops: 10 for the plain Favorite data (fid and pid) and 10 for the post data through the additional resolver.
Or does DynamoDB bundle it into one read operation in the background? Furthermore, is this an acceptable solution (or a normal approach) to this problem? Are you using this technique? Or would this get very expensive at scale?
EDIT: I'm using only one table, so every dataset is in it.
A quick question about the JSON API response key "type" matching up with an Ember model name.
If I have a model, say "models/photo.js", and a route like "/photos", my JSON API response looks like this:
{
  "data": [{
    "id": "298486374",
    "type": "photos",
    "attributes": {
      "name": "photo_name_1.png",
      "description": "A photo!"
    }
  }, {
    "id": "298434523",
    "type": "photos",
    "attributes": {
      "name": "photo_name_2.png",
      "description": "Another photo!"
    }
  }]
}
I'm under the assumption that my model name should be singular, but this error pops up:
Assertion Failed: You tried to push data with a type 'photos' but no model could be found with that name
This is, of course, because my model is named "photo".
Now in the JSON API spec there is a note that reads "This spec is agnostic about inflection rules, so the value of type can be either plural or singular. However, the same value should be used consistently throughout an implementation."
So,
tl;dr: is the "Ember way" to have the model names and the JSON API response key "type" both be singular? Or does it not matter, as long as they match?
The JSON API serializer expects a plural type. See the payload example from the guides.
Since the modelNameFromPayloadKey function singularizes the key, it also works with a singular type:
// as is
modelNameFromPayloadKey: function(key) {
  return singularize(normalizeModelName(key));
}
but the inverse operation, payloadKeyFromModelName, pluralizes the model name and should be changed if you use a singular type in your backend:
// as is
payloadKeyFromModelName: function(modelName) {
  return pluralize(modelName);
}
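For instance, a minimal sketch of such an override in an application serializer (the file path is illustrative):
// app/serializers/application.js
import DS from 'ember-data';

export default DS.JSONAPISerializer.extend({
  // keep the payload type singular instead of pluralizing the model name,
  // to match a backend that uses singular types
  payloadKeyFromModelName: function(modelName) {
    return modelName;
  }
});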
It is important to note that the internal Ember Data JSON API format differs a bit from the one used by JSONAPISerializer: store.push expects a singular type, while the JSON API serializer expects a plural one.
From discussion:
"...ED uses camelCased attributes and singular types internally, regardless of what adapter/serializer you're using.
When you're using the JSON API adapter/serializer we want users to be able to use the examples available on jsonapi.org and have it just work. Most users never have to care about the internal format since the serializer handles the work for them.
This is documented in the guides, http://guides.emberjs.com/v2.0.0/models/pushing-records-into-the-store/
..."
Depending on your use case, you might try pushPayload instead of push. As the documentation suggests, it does some normalization, and in my case it covered the "plural vs. singular" problem.
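For example, a quick sketch (the payload shape follows the jsonapi.org examples above):
// pushPayload runs the payload through the serializer's normalization,
// so the plural "photos" type resolves to the singular photo model
this.store.pushPayload({
  data: [{
    id: "298486374",
    type: "photos",
    attributes: { name: "photo_name_1.png" }
  }]
});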