The provided key element does not match the schema - AWS

I have seen a similar question posted on Stack Overflow, but the answers there didn't solve my issue.
I have created a resource of type GET in my API Gateway. In my query strings I'm passing the following:
email=x#gmail.com
or
racf=XXXX&email=x#gmail.com
I get this error:
The provided key element does not match the schema
But if I do it with the primary key, it works.
racf=XXXX
I have created an index in DynamoDB for the email attribute.
LAMBDA FUNCTION:
case 'GET':
    if (event.queryStringParameters) {
        dynamo.getItem({
            TableName: "eventregistration-db",
            Key: {
                //"racf": event.queryStringParameters.racf,
                "email": event.queryStringParameters.email
            }
        }, done);
    } else {
        dynamo.scan({ TableName: tableName }, done);
    }
    break;

It looks like email is not part of the primary key of your DynamoDB table.
For getItem you have to use the attributes the table's primary key (partition key and, optionally, sort key) is composed of.
scan doesn't need any key, because it performs a full search of the table - that's why it works in that case.
Set email (and racf) as the table's primary key to make it work with getItem.
If you want to use an index, you have to use query:
dynamo.query({
    TableName: tableName,
    IndexName: indexName,
    KeyConditionExpression: "email = :email",
    ExpressionAttributeValues: {
        ":email": event.queryStringParameters.email
    }
}, done);
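Putting the two paths together, a minimal sketch of the GET branch (assuming the index on email is named email-index - substitute whatever name you actually created):

case 'GET':
    if (event.queryStringParameters && event.queryStringParameters.racf) {
        // racf is the table's partition key, so getItem works directly
        dynamo.getItem({
            TableName: "eventregistration-db",
            Key: { "racf": event.queryStringParameters.racf }
        }, done);
    } else if (event.queryStringParameters && event.queryStringParameters.email) {
        // email is only indexed, so it has to go through query on the index
        dynamo.query({
            TableName: "eventregistration-db",
            IndexName: "email-index", // hypothetical index name
            KeyConditionExpression: "email = :email",
            ExpressionAttributeValues: { ":email": event.queryStringParameters.email }
        }, done);
    } else {
        dynamo.scan({ TableName: "eventregistration-db" }, done);
    }
    break;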

Related

Error "The provided key element does not match the schema" without sort key

I have a problem with a DynamoDB table.
I only have the partition key; when I request all the elements they are displayed correctly, but when I want to get a single element with a specific id I get the following error:
The provided key element does not match the schema
Yet I only have the partition key (primary key), without a sort key.
This is the relative lambda function code:
case "GET /fragments/{id}":
body = await dynamo
.get({
TableName: "frammento",
Key: {
id: event.pathParameters.id
}
.promise();
break;
})
How can I fix this error?
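This error usually means the attribute name (or type) in Key doesn't exactly match the table's key schema, so it's worth printing what the table actually expects. A minimal sketch, assuming the standard aws-sdk v2 client is available and the call runs inside the async handler ("frammento" is the table from the question):

// Inspect the table's key schema to see what the Key map must contain
const AWS = require("aws-sdk");
const ddb = new AWS.DynamoDB();

const { Table } = await ddb.describeTable({ TableName: "frammento" }).promise();
console.log(Table.KeySchema);            // e.g. [ { AttributeName: "id", KeyType: "HASH" } ]
console.log(Table.AttributeDefinitions); // shows the expected type (S, N, ...) of the key

// dynamo.get({ TableName: "frammento", Key: { id: ... } }) only works if the
// partition key really is named "id" and event.pathParameters.id has that type.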

DynamoDB - how to get the item attributes if ConditionExpression failed on "put"

I'm trying to save an item if the primary key doesn't exist. If this key already exists, I want to cancel the "put" and get the item that exists in the table.
I've noticed there is a field called "ReturnValuesOnConditionCheckFailure" in transactWrite that may be the solution:
Use ReturnValuesOnConditionCheckFailure to get the item attributes if the Put condition fails.
For ReturnValuesOnConditionCheckFailure, the valid values are: NONE and ALL_OLD.
It didn't work, and I get this error: Transaction cancelled, please refer cancellation reasons for specific reasons [ConditionalCheckFailed].
This is what I tried to do so far:
docClient.transactWrite({
    TransactItems: [{
        Put: {
            TableName: 'test.users',
            Item: {
                user_id,
                name,
            },
            ConditionExpression: 'attribute_not_exists(user_id)',
            ReturnValuesOnConditionCheckFailure: 'ALL_OLD',
        }
    }]
})
Any ideas?
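One option is the v3 JavaScript SDK, where a cancelled transaction surfaces a structured CancellationReasons list instead of just the reason codes in the error message. A minimal sketch, assuming @aws-sdk/client-dynamodb and the same test.users table (userId and userName stand in for the caller's values):

const { DynamoDBClient, TransactWriteItemsCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

async function createUserIfAbsent(userId, userName) {
    try {
        await client.send(new TransactWriteItemsCommand({
            TransactItems: [{
                Put: {
                    TableName: "test.users",
                    // the low-level v3 client takes typed attribute values
                    Item: { user_id: { S: userId }, name: { S: userName } },
                    ConditionExpression: "attribute_not_exists(user_id)",
                    ReturnValuesOnConditionCheckFailure: "ALL_OLD",
                }
            }]
        }));
        return null; // item was created, nothing pre-existing to return
    } catch (err) {
        if (err.name === "TransactionCanceledException") {
            // With ALL_OLD, the existing item (in DynamoDB JSON) is attached to the
            // cancellation reason whose Code is "ConditionalCheckFailed".
            return err.CancellationReasons[0].Item;
        }
        throw err;
    }
}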

appsync dynamodb won't return primary partition key

With thanks in advance as this is probably a 101 question - I can't find an answer anywhere.
I've set up what I think is a simple example of AppSync and DynamoDB.
In DynamoDB I have a categorys table, with items of the form
{
    slug: String!,
    nm: String,
    nmTrail: String,
    ...
}
So - no id field. slug is the primary partition key, not null and expected to be unique (it is unique in the data I've got loaded so far).
I've set up a simplified AppSync schema in line with the above definition and
a resolver...
{
    "version" : "2017-02-28",
    "operation" : "GetItem",
    "key" : {
        "slug" : { "S" : "${context.arguments.slug}" }
    }
}
A query such as
query GoGetOne {
    getCategory(slug: "Wine") {
        nm
    }
}
Works fine - returning the nm value for the correct item in categorys - similarly I can add any of the other properties in categorys to return them (e.g. nmTrail) except slug.
If I add slug (the primary partition key, a non-nullable String) to the result set, then I get a DynamoDB:AmazonDynamoDBException of the provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException).
If I scan/query/filter the table in DynamoDB all is good.
Most of the AWS examples use an id: ID! field in the 'get one' examples and also ask for it as a returned item.
Update 1 (in response to KD's request)
My update mutation schema is:
type Mutation {
    putCategory(
        slug: String!,
        nm: String,
        nmTrail: String,
        mainCategorySlug: String,
        mainCategoryNm: String,
        parentCategorySlug: String,
        parentCategoryNm: String
    ): Category
}
There's no resolver associated with that, and (obviously) I therefore haven't used the mutation to put anything yet - I'm just trying to read batch-uploaded data to begin with.
/update1
What am I missing?
I tried to reproduce your API as much as I could and it works for me.
Category DynamoDB table
Schema:
type Query {
    getCategory(slug: String!): Category
}

type Category {
    slug: String
    nm: String
    nmTrail: String
}
Resolver on Query.getCategory request template:
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "slug": $util.dynamodb.toDynamoDBJson($ctx.args.slug)
    }
}
Resolver on Query.getCategory response template:
$util.toJson($ctx.result)
Query:
query GoGetOne {
    getCategory(slug: "Wine") {
        slug
        nm
    }
}
Results:
{
    "data": {
        "getCategory": {
            "slug": "Wine",
            "nm": "Wine1-nm"
        }
    }
}

Map different Sort Key responses to Appsync Schema values

So here is my schema:
type Model {
    PartitionKey: ID!
    Name: String
    Version: Int
    FBX: String
    # ms since epoch
    CreatedAt: AWSTimestamp
    Description: String
    Tags: [String]
}

type Query {
    getAllModels(count: Int, nextToken: String): PaginatedModels!
}

type PaginatedModels {
    models: [Model!]!
    nextToken: String
}
I would like to call 'getAllModels' and have all of its data, and all of its tags, be filled in.
But here is the thing. Tags are stored via sort keys. Like so:
PartitionKey | SortKey
Model-0      | Model-0
Model-0      | Tag-Tree
Model-0      | Tag-Building
Is it possible to transform the 'Tag' sort keys into the Tags: [String] array in the schema via a DynamoDB resolver? Or must I do something extra fancy through a lambda? Or is there a smarter way to do this?
To clarify, are you storing objects like this in DynamoDB:
{ PartitionKey (HASH), Tag (SortKey), Name, Version, FBX, CreatedAt, Description }
and using a DynamoDB Query operation to fetch all rows for a given HashKey.
Query #PartitionKey = :PartitionKey
and getting back a list of objects, some of which have a different "Tag" value and one of which is "Model-0" (i.e. the same value as the partition key), which I assume contains all the other values for the record? E.g.:
[
    { PartitionKey, Tag: 'ValueOfPartitionKey', Name, Version, FBX, CreatedAt, ... },
    { PartitionKey, Tag: 'Tag-Tree' },
    { PartitionKey, Tag: 'Tag-Building' }
]
You can definitely write resolver logic without too much hassle that reduces the list of model objects into a single object with a list of "Tags". Let's start with a single item and see how to implement a getModel(id: ID!): Model query:
First define the request mapping template that will get all rows for a partition key:
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression": "#PartitionKey = :id",
        "expressionValues" : {
            ":id" : {
                "S" : "${ctx.args.id}"
            }
        },
        "expressionNames": {
            "#PartitionKey": "PartitionKey" ## whatever the table hash key is
        }
    },
    ## The limit will have to be sufficiently large to get all rows for a key
    "limit": $util.defaultIfNull(${ctx.args.limit}, 100)
}
Then to return a single model object that reduces "Tag" to "Tags" you can use this response mapping template:
#set($tags = [])
#set($result = {})
#foreach( $item in $ctx.result.items )
    #if($item.PartitionKey == $item.Tag)
        #set($result = $item)
    #else
        $util.qr($tags.add($item.Tag))
    #end
#end
$util.qr($result.put("Tags", $tags))
$util.toJson($result)
This will return a response like this:
{
    "PartitionKey": "...",
    "Name": "...",
    "Tags": ["Tag-Tree", "Tag-Building"]
}
Fundamentally I see no problem with this, but its effectiveness depends upon your query patterns. Extending this to the getAllModels use case is doable but will require a few changes and most likely a really inefficient Scan operation, because the table will be sparse in actual model information since many records are effectively just tags. You can alleviate this with GSIs pretty easily, but more GSIs means more $.
As an alternative approach, you can store your Tags in a different "Tags" table. This way you only store model information in the Model table and tag information in the Tag table and leverage GraphQL to perform the join for you. In this approach have Query.getAllModels perform a "Scan" (or Query) on the Model table and then have a Model.Tags resolver that performs a Query against the Tag table (HK: ModelPartitionKey, SK: Tag). You could then get all tags for a model and later create a GSI to get all models for a tag. You do need to consider that now the nested Model.Tag query will get called once per model but Query operations are fast and I've seen this work well in practice.
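A rough sketch of what the nested Model.Tags resolver could look like in that alternative setup (assuming a separate Tags table with ModelPartitionKey as the hash key and Tag as the sort key - both names are made up here). Request mapping template:

{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression": "#modelKey = :modelKey",
        "expressionNames": {
            "#modelKey": "ModelPartitionKey"
        },
        "expressionValues" : {
            ":modelKey" : { "S" : "${ctx.source.PartitionKey}" }
        }
    }
}

and a response mapping template that flattens the rows into the [String] list the schema expects:

#set($tags = [])
#foreach($item in $ctx.result.items)
    $util.qr($tags.add($item.Tag))
#end
$util.toJson($tags)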
Hope this helps :)

How to prevent a DynamoDB item being overwritten if an entry already exists

I'm trying to write a Lambda function to add new data to a DynamoDB table.
From reading the docs at:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html#put-property
The PUT method: "Creates a new item, or replaces an old item with a new item by delegating to AWS.DynamoDB.putItem()."
Other than doing a check for the object before 'putting', is there a setting or flag to make the PUT fail if the object already exists?
I can see in
params -> Expected -> Exists (Bool)
but can't see any documentation on what this does.
What would be the best (or fastest) architecture to prevent an item overwrite?
Query the table first and if no item exists then add the item
or
Attempt to insert the item and on failure because of duplicate entry report this back? (Is there a way to prevent item overwrite?)
The ConditionExpression can be used to check whether the key attribute values already exist in the table, and to perform the PUT operation only if the key values are not present in the table.
When you run the code below, the first time the put operation should be successful. On the second run, the put operation should fail with a "Conditional request failed" exception.
My Movies table has both partition and sort keys, so I have used both attributes in the conditional expression.
Sample code with conditional put:
var table = "Movies";
var year = 1502;
var title = "The Big New Movie";
var params = {
TableName:table,
Item:{
"yearkey": year,
"title": title,
"info":{
"plot": "Nothing happens at all.",
"rating": 0
}
},
ConditionExpression: "yearkey <> :yearKeyVal AND #title <> :title",
ExpressionAttributeNames: {
"#title" : "title"
},
ExpressionAttributeValues: {
":yearKeyVal" : year,
":title": {"S": title}
}
};
console.log("Adding a new item...");
docClient.put(params, function(err, data) {
if (err) {
console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2));
} else {
console.log("Added item:", JSON.stringify(data, null, 2));
}
});
Exception when the put operation is performed a second time:
Unable to add item. Error JSON: {
    "message": "The conditional request failed",
    "code": "ConditionalCheckFailedException",
    "time": "2017-10-02T18:26:26.093Z",
    "requestId": "7ae3b0c4-3872-478d-908c-94bc9492a43a",
    "statusCode": 400,
    "retryable": false,
    "retryDelay": 0
}
I see that this question relates to JavaScript; anyway, I will also write an example for Java (maybe it will be useful for someone):
DynamoDBSaveExpression saveExpression = new DynamoDBSaveExpression();
Map<String, ExpectedAttributeValue> expectedAttributes =
        ImmutableMapParameter.<String, ExpectedAttributeValue>builder()
                .put("hashKeyAttrName", new ExpectedAttributeValue(false))
                .put("rangeKeyAttrName", new ExpectedAttributeValue(false))
                .build();

saveExpression.setExpected(expectedAttributes);
saveExpression.setConditionalOperator(ConditionalOperator.AND);

try {
    dynamoDBMapper.save(item, saveExpression);
} catch (ConditionalCheckFailedException e) {
    e.printStackTrace();
}
ConditionalCheckFailedException will be thrown if we try to save an item whose hashKey and rangeKey pair already exists in DynamoDB.
As an addition to the correct answer of notionquest, the recommended way to express a condition that makes a putItem fail in case the item already exists is to assert that the partition key is not present in any potentially existing item:
"ConditionExpression": "attribute_not_exists(pk)",
This reads "if this item already exists before the putItem, make sure it does not already have a partition key" (pk being the partition key in this example). Since it is impossible for an item to exist without a partition key, this effectively means "make sure this item does not already exist".
See also the "Note" block at the beginning of this page: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/put-item.html
There's an example here of using this approach to guarantee uniqueness of non-key columns:
https://aws.amazon.com/blogs/database/simulating-amazon-dynamodb-unique-constraints-using-transactions/
As an addition to @Svend's solution, a full Java example (for AWS SDK v2, dynamodb-enhanced):
public <T> void insert(DynamoDbClient client, String tableName, T object, Class<T> clazz) {
    DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder().dynamoDbClient(client).build();
    DynamoDbTable<T> table = enhancedClient.table(tableName, TableSchema.fromBean(clazz));

    table.putItem(PutItemEnhancedRequest.builder(clazz)
            .item(object)
            .conditionExpression(Expression.builder()
                    .expression("attribute_not_exists(primaryKey)")
                    .build())
            .build());
}
where:
primaryKey - the name of the column on which uniqueness is checked
software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException - the exception thrown on a duplicate put