I use AWS Cognito as our user pool, and AWS DynamoDB for our data.
I would like to have fine-grained control over DynamoDB items (rows), on a per-user basis.
I am aware of strategies using user_id or tenant_id as the primary key, but these don't seem like they would work for my application.
The items in my database are project-based - so the partition keys are the project codes, e.g. '#PR001', '#PR002', '#PR003'.
I have different groups of users (roles) with different permissions, i.e. viewers who have read-only access, editors who can edit some of the data, and super-editors who can edit all of the data.
The projects that each user has access to are not simply grouped by tenancy. For example (pseudo-code):
user_1 = {
  role: "viewer",
  projects: ["#PR001", "#PR003", "#PR005"]
}
user_2 = {
  role: "editor",
  projects: ["#PR002", "#PR003"]
}
user_3 = {
  role: "super-editor",
  projects: ["#PR001", "#PR005"]
}
What is the simplest approach to giving users the right type of access to only the projects they are assigned to? Would it be possible to have an item in my DDB table that stores the access list for projects? Would that be secure?
NB. My user pool is small at the moment so it is not a big problem if there are some manual steps involved.
As the comments mention, an external fine-grained permissions solution, such as Amazon Verified Permissions (when generally available), may offer the most flexibility.
You can also use a feature of DynamoDB to solve this. From https://www.alexdebrie.com/posts/dynamodb-transactions/#access-control:
A third use case for transactions is in an authorization or access control setting.
Let's assume:
users are in USER_TABLE_NAME, with user as the PK, role as the SK, and projects as a String Set of the projects related to that role;
projects are in PROJECT_TABLE_NAME, with project as the PK and a status attribute of green|yellow|red.
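To make the assumed shape concrete, here is what user_2's item (from the question) would look like in DynamoDB's attribute-value format, under the table layout assumed above:

```typescript
// user_2's item in the assumed USER_TABLE_NAME, in DynamoDB attribute-value format.
const user2Item = {
  user: { S: "user_2" },                  // partition key
  role: { S: "editor" },                  // sort key
  projects: { SS: ["#PR002", "#PR003"] }, // String Set of assigned projects
};
```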
The prototype TypeScript code below (no error checks) can be called via update_project("user_2", "#PR002", "red"); to update project "#PR002" to status "red" on behalf of "user_2".
The ConditionCheck in the transaction verifies that the user is an "editor" and that the project is in their list of projects. Only if that succeeds does the Update execute. See the Amazon DynamoDB Developer Guide, "Managing complex workflows with DynamoDB transactions", for full details.
// Imports and setup (the table names here are placeholders; use your own):
import {
  DynamoDBClient,
  TransactWriteItemsCommand,
} from "@aws-sdk/client-dynamodb";

const ddbClient = new DynamoDBClient({});
const USER_TABLE_NAME = "users";       // placeholder
const PROJECT_TABLE_NAME = "projects"; // placeholder

const update_project = async function (
  user: string,
  project: string,
  status: "green" | "yellow" | "red"
) {
  const ddbResponse = await ddbClient.send(
    new TransactWriteItemsCommand({
      TransactItems: [
        {
          // Fails the whole transaction unless this user has an "editor"
          // item whose projects set contains the target project.
          ConditionCheck: {
            TableName: USER_TABLE_NAME,
            Key: {
              user: { S: user },
              role: { S: "editor" },
            },
            ConditionExpression: "contains(projects, :project)",
            ExpressionAttributeValues: {
              ":project": { S: project },
            },
          },
        },
        {
          // Only runs if the condition check above succeeds.
          Update: {
            TableName: PROJECT_TABLE_NAME,
            Key: {
              project: { S: project },
            },
            UpdateExpression: "SET #S = :status",
            ExpressionAttributeNames: {
              "#S": "status", // "status" is a DynamoDB reserved word
            },
            ExpressionAttributeValues: {
              ":status": { S: status },
            },
          },
        },
      ],
    })
  );
  return ddbResponse;
};
When called using "user_1" it will fail with:
TransactionCanceledException: Transaction cancelled, please refer cancellation reasons for specific reasons [ConditionalCheckFailed, None]
You will need to adapt this to your specific rules, e.g. allowing viewers to GetItem and editors to update only specific fields.
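As a sketch of what such a rule looks like, here is an in-memory mirror of the editor check (a hypothetical helper; the DynamoDB ConditionCheck remains the actual enforcement point, so never rely on a client-side test alone):

```typescript
type Role = "viewer" | "editor" | "super-editor";

interface UserRecord {
  role: Role;
  projects: string[];
}

// Mirrors the transaction's rule: only editors and super-editors may write,
// and only to projects they are assigned to.
function canWrite(user: UserRecord, project: string): boolean {
  if (user.role === "viewer") return false;
  return user.projects.includes(project);
}
```

With the example users from the question, canWrite(user_2, "#PR002") is true, while canWrite(user_1, "#PR001") is false because user_1 is a viewer.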
Use Case: I want to delete a DynamoDB record via a mutation, as the owner of the record, using AWS Amplify, GraphQL, and TypeScript.
Note:
All changes have been successfully built and deployed in the AWS Amplify pipeline as a full-stack CI/CD build.
I am logged in as the owner of the record.
I have used console.log to ensure the values are obtained before the await function is run.
Front-End Query:
await API.graphql({
  query: deleteImage,
  variables: {
    input: {
      id: fileName,
      employerID: Auth.user.attributes["custom:id"],
    },
  },
  authMode: "AMAZON_COGNITO_USER_POOLS",
});
GraphQL Schema:
type Image
  @model
  @auth(
    rules: [
      { allow: owner, operations: [create, update, delete, read] }
    ]
  )
GraphQL Mutation:
export const deleteImage = /* GraphQL */ `
  mutation DeleteImage(
    $input: DeleteImageInput!
    $condition: ModelImageConditionInput
  ) {
    deleteImage(input: $input, condition: $condition) {
      id
      owner
      name
      contentType
      createdAt
      updatedAt
      employerID
    }
  }
`;
Error Message:
"Not Authorized to access deleteImage on type Mutation"
The unauthorized error can occur, misleadingly, when the record does not exist in the DynamoDB table.
Check your table to confirm the record exists, as this may be the cause.
Which Category is your question related to?
DynamoDB, AppSync(GraphQL)
Amplify CLI Version
4.50.2
Provide additional details e.g. code snippets
BACKGROUND:
I'm new to AWS serverless app systems and, as a frontend dev, I'm quite enjoying it thanks to the auto-generated APIs, tables, connections, resolvers, etc. I'm using Angular/Ionic in the frontend and S3, DynamoDB, AppSync, Cognito, and the Amplify CLI for the backend.
WHAT I HAVE:
Here is a part of my schema. I can easily use the auto-generated APIs to List/Get Feedbacks with additional filters (e.g. score: { ge: 3 }). And thanks to the @connection, I can see the User's details in the listed Feedback items.
type User @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  email: String!
  name: String!
  region: String!
  sector: String!
  companyType: String!
}

type Feedback @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  user: User @connection
  score: Int!
  content: String
}
WHAT I WANT:
I want to list Feedbacks based on several fields of the User type, such as the user's region (e.g. user.region: { contains: 'United States' }). I searched quite a lot for a solution (e.g. #2311) and learned that amplify codegen only creates top-level filtering. In order to use cross-table filtering, I believe I need to modify resolvers, Lambda functions, queries, and inputs, which looks quite complex for a beginner.
WHAT I TRIED/CONSIDERED:
I tried listing all Users and Feedbacks separately and filtering them in the front-end. But then the client downloads all of this unnecessary data. Also, because of the pagination limit, the user experience takes a hit: users see an empty list and repeatedly need to click the Load More button.
Thanks to some suggestions, I also considered duplicating the User details in the Feedback table to be able to search/filter them. The problem is that if a User updates his/her info, the duplicated values will be out-of-date. There would also be a lot of duplicated data, as I need this feature for other tables too.
I also heard about using ElasticSearch for this problem, but someone mentioned getting a $30 monthly cost for simple filtering, so I got cold feet.
I tried the resolver solution to add custom filtering, but I found it quite complex for a beginner. I will also need this cross-table filtering in many other tables, so I think it would be hard to manage. If that is the best practice, I'd appreciate it if someone could guide me through it.
QUESTIONS:
What would be the easiest/beginner-friendly solution for me to achieve this cross-table filtering? I am open to alternative solutions.
Is this cross-table filtering a bad approach for a NoSQL setup, since I need some relationship between two tables? (I thought @connection would be enough.) Should I switch to an SQL setup before it is too late?
Is it possible for Amplify to auto-generate a solution for this in the future? I feel like many people are experiencing the same issue.
Thank you in advance.
Amplify, and really DynamoDB in general, requires you to think about your access patterns ahead of time. There is a lot of really good information out there to help guide you through what this thought process can look like. Particularly, I like Nader Dabit's https://dev.to/dabit3/data-modeling-in-depth-with-graphql-aws-amplify-17-data-access-patterns-4meh
At first glance, I think I would add a new @key called byCountry to the User model, which will create a new Global Secondary Index on that property for you in DDB and will give you some new query methods as well. Check out https://docs.amplify.aws/cli/graphql-transformer/key#designing-data-models-using-key for more examples.
Once you have User.getByCountry in place, you should then be able to also bring back each user's Feedbacks.
query USAUsersWithFeedbacks {
  listUsersByCountry(country: "USA") {
    items {
      feedbacks {
        items {
          content
        }
        nextToken
      }
    }
    nextToken
  }
}
Finally, you can use JavaScript to fetch all pages while the nextToken is not null. You will be able to re-use this function for each country you are interested in, and you should be able to extend this example to other properties by adding additional @keys.
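The fetch-all loop can be sketched like this (a hypothetical helper; in the app, fetchPage would wrap the generated listUsersByCountry call and pass the token through):

```typescript
type Page<T> = { items: T[]; nextToken: string | null };

// Keep requesting pages until the API stops returning a nextToken.
async function fetchAll<T>(
  fetchPage: (nextToken: string | null) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let token: string | null = null;
  do {
    const page = await fetchPage(token);
    all.push(...page.items);
    token = page.nextToken;
  } while (token !== null);
  return all;
}
```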
My former answer can still be useful for others in specific scenarios, but I found a better way to achieve nested filtering when I realized you can filter nested items in custom queries.
Schema:
type User @model {
  id: ID!
  email: String!
  name: String!
  region: String!
  sector: String!
  companyType: String!
  feedbacks: [Feedback] @connection # <-- User has many feedbacks
}
Custom query:
query ListUserWithFeedback(
  $filter: ModelUserFilterInput # <-- Filter Users by region or any other User field
  $limit: Int
  $nextToken: String
  $filterFeedback: ModelFeedbackFilterInput # <-- Filter inner Feedbacks by Feedback fields
  $nextTokenFeedback: String
) {
  listUsers(filter: $filter, limit: $limit, nextToken: $nextToken) {
    items {
      id
      email
      name
      region
      sector
      companyType
      feedbacks(filter: $filterFeedback, nextToken: $nextTokenFeedback) {
        items {
          content
          createdAt
          id
          score
        }
        nextToken
      }
      createdAt
      updatedAt
    }
    nextToken
  }
}
$filter can be something like:
{ region: { contains: 'Turkey' } }
$filterFeedback can be like:
{
  and: [{ content: { contains: 'hello' }, score: { ge: 4 } }]
}
This way both Users and Feedbacks can be filtered at the same time.
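For illustration, the two filters can be combined in a single variables object when invoking the custom query (hypothetical values; the commented line shows where the generated API.graphql client would be called):

```typescript
const variables = {
  filter: { region: { contains: "Turkey" } }, // applies to Users
  filterFeedback: {
    and: [{ content: { contains: "hello" }, score: { ge: 4 } }], // applies to nested Feedbacks
  },
  limit: 20, // hypothetical page size
};
// await API.graphql({ query: ListUserWithFeedback, variables });
```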
OK, thanks to @alex's answers I implemented the following. The idea is that instead of listing Feedbacks and trying to filter them by User fields, we list Users and collect their Feedbacks from the response:
Updated schema.graphql as follows:
type User
  @model
  @auth(rules: [{ allow: owner }])
  @key(name: "byRegion", fields: ["region"], queryField: "userByRegion") { # <-- added byRegion key
  id: ID!
  email: String!
  name: String!
  region: String!
  sector: String!
  companyType: String!
  feedbacks: [Feedback] @connection # <-- added feedbacks connection
}
Added the userFeedbacksId parameter when calling CreateFeedback, so the Feedbacks will appear when listing Users.
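Such a call could look like this (hypothetical field values; userFeedbacksId is the connection field Amplify generates for the feedbacks @connection above):

```typescript
const currentUserId = "some-user-id"; // hypothetical: the owning User's id

const input = {
  score: 4,
  content: "hello",
  userFeedbacksId: currentUserId, // <-- ties this Feedback to its User
};
// await this.api.CreateFeedback(input);
```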
Added a custom query UserByRegionWithFeedback under src/graphql/custom-queries.graphql and used amplify codegen to build it:
query UserByRegionWithFeedback(
  $region: String
  $sortDirection: ModelSortDirection
  $filter: ModelUserFilterInput
  $limit: Int
  $nextToken: String # <-- nextToken for getting more Users
  $nextTokenFeedback: String # <-- nextToken for getting more Feedbacks
) {
  userByRegion(
    region: $region
    sortDirection: $sortDirection
    filter: $filter
    limit: $limit
    nextToken: $nextToken
  ) {
    items {
      id
      email
      name
      region
      sector
      companyType
      feedbacks(nextToken: $nextTokenFeedback) {
        items {
          content
          createdAt
          id
          score
        }
        nextToken
      }
      createdAt
      updatedAt
      owner
    }
    nextToken
  }
}
Now I call this API like the following:
nextToken = {
  user: null,
  feedback: null,
};
feedbacks: any[] = []; // initialize so push() works

async listFeedbacks() {
  try {
    const res = await this.api.UserByRegionWithFeedback(
      'Turkey', // <-- region: filter Users by their region, I will add UI input later
      null,     // <-- sortDirection
      null,     // <-- filter
      null,     // <-- limit
      this.nextToken.feedback == null ? this.nextToken.user : null, // <-- User nextToken: only send if Feedback nextToken is null
      this.nextToken.feedback // <-- Feedback nextToken
    );
    // Get the User nextToken
    this.nextToken.user = res.nextToken;
    // Initialize the Feedback nextToken as null
    this.nextToken.feedback = null;
    // Loop over the Users in the response
    res.items.map((user) => {
      // Take the Feedback nextToken from the User if it is not null (or else the last User in the list could overwrite it)
      if (user.feedbacks.nextToken) {
        this.nextToken.feedback = user.feedbacks.nextToken;
      }
      // Push the feedback items into the list to display in the UI
      this.feedbacks.push(...user.feedbacks.items);
    });
  } catch (error) {
    this.handleError.show(error);
  }
}
Lastly, I added a Load More button in the UI which calls the listFeedbacks() function. So if there is any Feedback nextToken, I send it to the API. (Note that multiple users' feedbacks can each have a nextToken.)
If all Feedbacks have been fetched and there is a User nextToken, I send that to the API and repeat the process for the new Users.
I believe this could be much simpler with an SQL setup, but this will work for now. I hope it helps others in my situation. And if there are any ideas to make this better, I'm all ears.
I just want to know the steps to select different regions and zones for APIs in the same project in the Google Cloud console.
I have already tried setting the default location and region, but I want to select it every time an API is enabled.
There is no feature to choose the location of an API, but you can set the location/region when creating an instance of most Google Cloud products or services, such as App Engine, Cloud Functions, Compute Engine, etc.
Note that the selected location/region of some services, like App Engine, cannot be changed once you have deployed your app. The way to change it is to create a new project and select the preferred location.
If you are referring to this documentation about using the changed default location, I believe it applies only to Compute Engine resources. I would recommend always checking the default region and zone, or the selected location settings, when creating and managing your resources.
The default zone and region of Compute Engine are saved in the project metadata, so you should set or change them there.
You should use the following API: projects.setCommonInstanceMetadata
https://cloud.google.com/compute/docs/reference/rest/v1/projects/setCommonInstanceMetadata
Example in Node.js:
// Requires: npm install googleapis
const { google } = require('googleapis');
const compute = google.compute('v1');

async function addDefaultRegion(authClient, projectName) {
  const request = {
    project: projectName,
    resource: {
      items: [
        {
          key: 'google-compute-default-region',
          value: 'europe-west1',
        },
      ],
    },
    auth: authClient,
  };
  compute.projects.setCommonInstanceMetadata(request, function (err, response) {
    if (err) {
      console.error(err);
      return;
    }
    console.log(JSON.stringify(response, null, 2));
  });
}

async function authorize() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  return await auth.getClient();
}
DynamoDB operates best with a single table per application (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html), yet AppSync by default breaks that rule through the code it auto-generates from the GraphQL schema (which AWS recommends letting it do). Therefore, to use AppSync with GraphQL while upholding DynamoDB's best practices (assuming DynamoDB is the sole data source for the GraphQL API), would this approach work?
First, create a blank DynamoDB table (TheTable in this example) and give it a partition key (partitionKey) and a sort key (sortKey).
Second, manually map every GraphQL type to that same table (TheTable). This is where we depart from AppSync's automatic code generation.
GraphQL schema:
type Pineapple {
  partitionKey: String!
  sortKey: String!
  name: String!
}

# create varying types as long as they all map to the same table
type MachineGun {
  partitionKey: String!
  sortKey: String!
  name: String!
}

input CreatePineappleInput {
  partitionKey: String!
  sortKey: String!
  name: String!
}

type Mutation {
  createPineapple(input: CreatePineappleInput!): Pineapple
}
Third, configure your own resolvers to handle the schema (again avoiding the auto-generated code):
Resolver:
{
  "version" : "2017-02-28",
  "operation" : "PutItem",
  "key" : {
    "partitionKey": $util.dynamodb.toDynamoDBJson($ctx.args.input.partitionKey),
    "sortKey": $util.dynamodb.toDynamoDBJson($ctx.args.input.sortKey)
  },
  "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input)
}
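For completeness, the matching response mapping template (assuming a plain pass-through of the written item) can simply be:

```
$util.toJson($ctx.result)
```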
And when we run the mutation in the AppSync console:
GraphQL operation:
mutation createPineapple($createPineappleInput: CreatePineappleInput!) {
  createPineapple(input: $createPineappleInput) {
    name
  }
}

Query variables:
{
  "createPineappleInput": {
    "partitionKey": "attraction123",
    "sortKey": "meta",
    "name": "Looking OK"
  }
}
We get the result we hoped for:
{
  "data": {
    "createPineapple": {
      "name": "Looking OK"
    }
  }
}
Is there a reason why this wouldn't achieve single-table efficiency using AppSync?
I'm not sure this statement is true
DynamoDB operates best with a single table per application
Do you mind sharing where you saw this?
DynamoDB does indeed work best if the table schema is built based on the application access patterns. That does not necessarily mean you must fit everything in one table.
I am trying to make an application using Loopback as my back-end. I have used Loopback before, but now I want to do something that I have never done before.
What I want is simple: I will have 3 types of users, administrator, servicer, and default. But I need to restrict the access controls for each type of user; the administrator can request all my routes, but the default user, for example, can only request some routes that I will specify. The ACL part I know how to do, but I can't find anything explaining how to make each type of user a role and make it work.
Can anyone post an example here with at least two users and roles?
The first step is to persist the two new roles, "administrator" and "servicer", into your database. You can either do this step manually or create a script you can reuse:
// commands/add_roles.js
let app = require('../server/server')

function createRole(name, description, done) {
  app.models.Role.findOrCreate(
    {where: {name: name}},
    {name, description},
    err => {
      // TODO handle error
      done && done()
    }
  )
}

createRole('administrator', 'Administrators have more control on the data', () => {
  createRole('servicer', 'servicer description', process.exit)
})
Then, you associate a role with a user. Execute the code below wherever appropriate, depending on your application.
app.models.Role.findOne({where: {name: 'administrator'}}, (err, role) => {
  // TODO handle error
  app.models.RoleMapping.findOrCreate({where: {principalId: user.id}}, {
    roleId: role.id,
    principalType: app.models.RoleMapping.USER,
    principalId: user.id
  }, function (err) {
    // TODO handle error
    // if no errors, the user now has the administrator role
  })
})
You can now use the roles "administrator" and "servicer" in your models' ACLs.
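For example, the acls section of a model's JSON definition could then look like this (a sketch assuming the LoopBack 3 ACL format; adapt accessType and property to your own routes):

```json
"acls": [
  { "accessType": "*", "principalType": "ROLE", "principalId": "$everyone", "permission": "DENY" },
  { "accessType": "*", "principalType": "ROLE", "principalId": "administrator", "permission": "ALLOW" },
  { "accessType": "READ", "principalType": "ROLE", "principalId": "servicer", "permission": "ALLOW" }
]
```

The first entry denies everyone by default, and the later entries selectively re-open access per role.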