How to PutItem in dynamodb only if the item does not exist - amazon-web-services

I've got a table in dynamodb that only has a partition key, and I want to put an item in the db only if there isn't another item with the same partition key in the db.
I've tried using ConditionExpression attribute_not_exists to no avail.
The issue is that a certain item may not already exist in the db, so attribute_not_exists fails with an
Invalid ConditionExpression: Operator or function requires a document path; operator or function: attribute_not_exists
Edit: a commenter asked, "Can you please post the full condition expression you used + the name of the table's partition key?" Here is the request:
Key: {
  username: event.pathParameters.username
},
ExpressionAttributeValues: {
  ":username": event.pathParameters.username
},
Item: {
  userId: event.requestContext.identity.cognitoIdentityId
},
ConditionExpression: "attribute_not_exists(:username)"

I just tried it with an attribute_not_exists condition and it seems to work as expected:
$ aws dynamodb create-table --table-name example1 \
--attribute-definitions AttributeName=pk,AttributeType=S \
--key-schema AttributeName=pk,KeyType=HASH --billing-mode PAY_PER_REQUEST
...
$ aws dynamodb put-item --table-name example1 \
--item '{"pk": {"S": "abc"}, "city": {"S": "NYC"}}'
$ aws dynamodb scan --table-name example1
{
    "Items": [
        {
            "city": {
                "S": "NYC"
            },
            "pk": {
                "S": "abc"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "ConsumedCapacity": null
}
$ aws dynamodb put-item --table-name example1 \
--item '{"pk": {"S": "abc"}, "city": {"S": "SF"}}' \
--condition-expression "attribute_not_exists(pk)"
An error occurred (ConditionalCheckFailedException) ...
$
Why did your request fail?
Based on the request you posted, I believe the culprit is your condition expression.
Instead of "attribute_not_exists(:username)" it should be attribute_not_exists(username). the : prefix denotes a value place holder whereas the attribute_not_exists function does not need a value, it needs an attribute name. Once you make the change you will also need to remove the ExpressionAttributeValues field because the value placeholder it defines (namely: :username) is no longer used anywhere in the request.
So, to summarize, this request should work for you:
Key: {
  username: event.pathParameters.username
},
Item: {
  userId: event.requestContext.identity.cognitoIdentityId
},
ConditionExpression: "attribute_not_exists(username)"
One last (super minor) comment: the request you posted looks like an update request. I believe that for your use case you can use put, which needs a somewhat simpler request. Specifically, with put you just specify the entire item; you do not need to specify the key separately from the other attributes. Note that, just like update, put also supports a ConditionExpression, which is the critical piece of the solution.
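For illustration, here is a minimal sketch of what the put-based request could look like with the DocumentClient. The table name UsersTable is a placeholder; the event fields are taken from the question.
// Sketch only: AWS SDK v2 DocumentClient inside a Lambda handler.
// "UsersTable" is a placeholder table name.
const AWS = require("aws-sdk");
const dynamoDb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const params = {
    TableName: "UsersTable",
    // With put, the key attribute is just part of the item.
    Item: {
      username: event.pathParameters.username,
      userId: event.requestContext.identity.cognitoIdentityId
    },
    // Rejects the write with ConditionalCheckFailedException
    // if an item with this partition key already exists.
    ConditionExpression: "attribute_not_exists(username)"
  };
  await dynamoDb.put(params).promise();
  return { statusCode: 200, body: JSON.stringify({ created: true }) };
};
Whether the request goes through put or update, it is the ConditionExpression that enforces the "only if it does not already exist" behaviour.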

Related

AWS DynamoDB Attribute Names containing Spaces

I have an AWS DynamoDB table that is populated via a Lambda script triggered by a web form.
The table ends up with Attribute Names like "Full Name" and "Phone Number".
From the aws CLI in PowerShell I can run:
aws dynamodb scan --table-name cc_career --filter-expression 'ID = :t' --expression-attribute-values '{\":t\":{\"N\":\"12\"}}'
and it will return expected values (Attribute Name = ID, value = 12).
But if I want to filter on the attribute "Full Name", for example:
aws dynamodb scan --table-name cc_career --filter-expression 'Full Name = :t' --expression-attribute-values '{\":t\":{\"S\":\"Sherman Horton\"}}'
I get
An error occurred (ValidationException) when calling the Scan operation: Invalid FilterExpression: Syntax error; token: "Name", near: "Full Name ="
How does one properly escape or specify an Attribute Name that contains a space?
I read up on using "expression attribute names" from the docs. But even this example:
aws dynamodb scan --table-name cc_career --return-consumed-capacity "TOTAL" --projection-expression "#fn,#dt" --expression-attribute-names '{\"#fn\":\"Email\",\"#dt\":\"Full Name\"}'
will execute without error BUT not return the "Full Name" data.
I did a pretty thorough 'net search on this topic but found nothing. Surely it's a common use case!
You are right about using expression attribute names:
If an attribute name begins with a number, contains a space or contains a reserved word, you must use an expression attribute name to replace that attribute's name in the expression.
aws dynamodb scan --table-name cc_career --return-consumed-capacity "TOTAL" --projection-expression "#fn,#dt" --expression-attribute-names '{\"#fn\":\"Email\",\"#dt\":\"Full Name\"}'
The problem here is that you are missing a filter expression: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.FilterExpression
Considering a table design with ID as the partition key and Full Name and Phone Number as attributes, I was able to scan data based on Full Name using the command below:
aws dynamodb scan --table-name stack_overflow --filter-expression "#fn = :t" --expression-attribute-values '{":t":{"S":"Jatin Mehrotra"}}' --expression-attribute-names '{"#fn":"Full Name"}'
My results after running the above command:
{
    "Items": [
        {
            "ID": {
                "N": "1"
            },
            "Full Name": {
                "S": "Jatin Mehrotra"
            },
            "Phone Number": {
                "S": "123456789"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "ConsumedCapacity": null
}

AWS Step Function get all Range keys with a single primary key in DynamoDB

I'm building AWS Step Function state machines. My goal is to read all Items from a DynamoDB table with a specific value for the Hash key (username) and any (wildcard) Sort keys (order_id).
Basically something that would be done from SQL with:
SELECT username,order_id FROM UserOrdersTable
WHERE username = 'daffyduck'
I'm using Step Functions to get configurations for AWS Glue jobs from a DynamoDB table and then spawn a Glue job via Step Functions' Map state for each DynamoDB item (with parameters read from the database).
"Read Items from DynamoDB": {
"Type": "Task",
"Resource": "arn:aws:states:::dynamodb:getItem",
"Parameters": {
"TableName": "UserOrdersTable",
"Key": {
"username": {
"S": "daffyduck"
},
"order_id": {
"S": "*"
}
}
},
"ResultPath": "$",
"Next": "Invoke Glue jobs"
}
But I can't get the state machine to read all order_ids for the user daffyduck in the step function task above. No output is displayed using the code above, apart from HTTP stats.
Is there a wildcard for order_id? Is there another way of getting all order_ids? The query customization seems to be rather limited inside Step Functions:
https://docs.amazonaws.cn/amazondynamodb/latest/APIReference/API_GetItem.html#API_GetItem_RequestSyntax
Basically I'm trying to accomplish what can be done from the command line like so:
$ aws dynamodb query \
--table-name UserOrdersTable \
--key-condition-expression "Username = :username" \
--expression-attribute-values '{
":username": { "S": "daffyduck" }
}'
Any ideas? Thanks
I don't think that is possible with the Step Functions DynamoDB service integration yet.
It currently supports GetItem, PutItem, DeleteItem and UpdateItem, not Query or Scan.
For GetItem we need to pass the entire key (partition + range key):
For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key.
We need to write a Lambda function to query DynamoDB and return the items, and invoke that Lambda function from the step, as sketched below.
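A minimal sketch of such a Lambda (Node.js with the AWS SDK v2 DocumentClient; the table and key names follow the question, while the event shape and everything else is illustrative):
// Sketch only: returns all items for one username so a Map state can iterate over them.
// Assumes the state machine passes an input like { "username": "daffyduck" }.
const AWS = require("aws-sdk");
const dynamoDb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const params = {
    TableName: "UserOrdersTable",
    KeyConditionExpression: "username = :username",
    ExpressionAttributeValues: { ":username": event.username }
  };

  // Query results are paginated, so follow LastEvaluatedKey until it is gone.
  const items = [];
  let page;
  do {
    page = await dynamoDb.query(params).promise();
    items.push(...page.Items);
    params.ExclusiveStartKey = page.LastEvaluatedKey;
  } while (page.LastEvaluatedKey);

  return items;
};
The state machine would then invoke this function (for example via the Lambda integration, or by using the function ARN as the task Resource) instead of the dynamodb:getItem integration, and feed the returned list into the Map state.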

how to write aws dynamodb scan filter

I have one DynamoDB table and one record looks like below:
{
    "Items": [
        {
            "toObjectId": {
                "S": "5678"
            },
            "keyValue": {
                "S": "7890"
            },
            "aTypeId": {
                "S": "test"
            },
            "aws:rep:deleting": {
                "BOOL": false
            },
            "currentAt": {
                "N": "1582476260000"
            },
            "keyTypeId": {
                "S": "test2"
            },
            "aId": {
                "S": "1234"
            },
            "aTypeId_keyTypeId_keyValue_currentAt": {
                "S": "test_test2_7890_1582476260000"
            },
            "fromObjectId": {
                "S": "12345"
            }
        }
    ],
    "Count": 2562,
    "ScannedCount": 2562,
    "ConsumedCapacity": null
}
How can I write an aws dynamodb scan/query filter with the AWS CLI to just get aTypeId and aId when aTypeId is "test"?
For reference:
Primary partition key is aId (String)
Primary sort key is aTypeId_keyTypeId_keyValue_currentAt (String)
I have tried the below but had no luck with it:
aws dynamodb query \
--table-name test \
--key-condition-expression "aTypeId = :aTypeId" \
--expression-attribute-values '{
":aTypeId": { "S": "test" }
}'
Your field is not a key or part of a GSI (Global Secondary Index), so I think you have to use the scan method to get objects by aTypeId.
The query will look like:
aws dynamodb scan \
--table-name test \
--filter-expression "aTypeId = :typeValue" \
--projection-expression "aTypeId, aId" \
--expression-attribute-values '{":typeValue":{"S":"test"}}'
If you get back a result with a LastEvaluatedKey value, this means you need one or more further calls to get all the data:
aws dynamodb scan \
--table-name test \
--filter-expression "aTypeId = :typeValue" \
--projection-expression "aTypeId, aId" \
--starting-token VALUE_NEXT_TOKEN_OF_LAST_QUERY \
--expression-attribute-values '{":typeValue":{"S":"test"}}'
But I recommend creating a GSI with aTypeId as the hash key; that will work better.
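A rough sketch of how that index could be added with the CLI (the index name aTypeId-index is made up; KEYS_ONLY is enough here because table keys such as aId are always projected into the index; drop ProvisionedThroughput if the table uses on-demand billing):
aws dynamodb update-table \
--table-name test \
--attribute-definitions AttributeName=aTypeId,AttributeType=S \
--global-secondary-index-updates '[{
  "Create": {
    "IndexName": "aTypeId-index",
    "KeySchema": [{"AttributeName": "aTypeId", "KeyType": "HASH"}],
    "Projection": {"ProjectionType": "KEYS_ONLY"},
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}
  }
}]'
The index is backfilled in the background and has to reach ACTIVE status before it can be queried.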
Since aTypeId is not the partition key (aId is), you can only scan through the table (which might be a very expensive operation). Try the command below to scan for the item:
aws dynamodb scan \
--table-name test \
--projection-expression "aTypeId, aId" \
--filter-expression "aTypeId = :aTypeId" \
--expression-attribute-values '{":aTypeId":{"S":"test"}}'
As a precaution, a single Scan request will only return up to 1 MB of data. If there is any more data to scan, the response will include a LastEvaluatedKey, which tells you to run the Scan request again, this time with --starting-token set to that LastEvaluatedKey.
If this type of operation is routine in your case, it will be better to use a Global Secondary Index with aTypeId as its partition key, as shown below.
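For illustration, once a GSI with aTypeId as its partition key exists (the index name aTypeId-index below is made up), the expensive Scan becomes a cheap Query:
aws dynamodb query \
--table-name test \
--index-name aTypeId-index \
--key-condition-expression "aTypeId = :aTypeId" \
--projection-expression "aTypeId, aId" \
--expression-attribute-values '{":aTypeId":{"S":"test"}}'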

In aws-cli, how to delete items from DynamoDB based on a condition?

I'm using the AWS CLI and having issues deleting items based on a condition.
I have a file_name key and I'd like to remove all items whose file_name contains "20200322".
Command
aws dynamodb delete-item \
--table-name "logs" \
--condition-expression "contains(file_name, :file_name)" \
--expression-attribute-names file://expression.json \
--key file://key.json
expression.json - the variables to use in the contains
{
    ":file_name": {
        "S": "20200322"
    }
}
key.json - I don't understand the point of this file
{
    "file_name": {
        "S": "20200322"
    }
}
Error
Parameter validation failed: Invalid type for parameter
ExpressionAttributeNames.:file_name, value: OrderedDict([(u'S',
u'20200322')]), type: , valid types:
Questions
How do I delete a single entry based on a contains condition?
Why is the key mandatory if I'm using the --expression-attribute-names switch? What does the key need to be?
What is the difference between --expression-attribute-values and --expression-attribute-names?
Reference
DynamoDB > Condition Expressions
DynamoDB > API_DeleteItem_RequestSyntax
The contains function takes 2 parameters: a path and an operand:
contains (path, operand)
Here you're missing the operand.
aws dynamodb delete-item \
--table-name "logs" \
--key '{"file_name": {"S": "20200322"}}'
--condition-expression "contains(file_name, :file_name)" \
--expression-attribute-values file://wvid_logs.json
Note that there are double quotes within a pair of single quotes.
And the JSON should be something like:
{
    ":file_name": {
        "S": "20200322"
    }
}
The thing is that you want to run a conditional delete, so the key needs to be the key of the item you want to delete, and the expression attribute values will be the condition to check. I am not fully sure you can run a condition on the key itself.
Let's suppose you have:
{
    "Id": {
        "N": "12345"
    },
    "file_name": {
        "S": "20200322"
    }
}
Running the command:
aws dynamodb delete-item \
--table-name "logs" \
--key '{"Id": {"N": "12345"}}'
--condition-expression "contains(file_name, :file_name)" \
--expression-attribute-values file://wvid_logs.json
The command will delete the item only when the condition from the file matches the item. So if in the file you have:
{
    ":file_name": {
        "S": "20200322"
    }
}
It will delete the item; any other value in your JSON file will not delete the item.
I had a similar issue; in my case I found (after a long while) that the key property (and all map properties) should be enclosed in single quotes when input directly in the terminal:
aws dynamodb delete-item \
--table-name "logs" \
--condition-expression "contains(file_name, :file_name)" \
--expression-attribute-values file://expression.json \
--key '{"file_name": {"S": "20200322"}}'

how to return items in a dynamodb on aws-cli

So, I have a DynamoDB table Users and I want to return all the contents of this table. Or maybe even some.
I tried
aws dynamodb query --table-name Users
and it says I have to specify key-condition or key-condition-expression, so I added the following:
aws dynamodb query --table-name Users --key-condition-expression Username = "test"
and it returns an error message " Unknown options: test ".
If you want to dump the whole table, just use
aws dynamodb scan --table-name Users
Try this format:
aws dynamodb get-item --table-name Users --key '{"Username": {"S": "test"}}'
Since the question is about using the query operation, here it goes.
As the AWS CLI documentation explains, you should separate the attribute values from the condition by using the --expression-attribute-values parameter:
aws dynamodb query --table-name Users \
--key-condition-expression "Username = :v1" \
--expression-attribute-values "{ \":v1\" : { \"S\" : \"test\" } }"
Additionally, you may combine more attributes in the filter (in my case I have a Datetime sort key I want to filter by):
aws dynamodb query \
--table-name Users \
--key-condition-expression "Username = :v1 AND #Datetime BETWEEN :v2 AND :v3" \
--expression-attribute-names "{ \"#Datetime\": \"Datetime\" }" \
--expression-attribute-values "{ \":v1\" : { \"S\" : \"test\" }, \":v2\" : { \"S\" : \"2019-06-06\" }, \":v3\" : { \"S\" : \"2019-06-07\" } }"
Here the #Datetime mapping is done through the --expression-attribute-names parameter, because Datetime is a reserved keyword, so I cannot use it inside the key condition expression.
As per my understanding, you are not passing the "key" (hash or hash/range) properly.
create a file containing your keys:
test.json
{
    "userName": {"S": "abc"},
    "anyRangeKey": {"S": "xyz"} // optional
}
Run
aws dynamodb get-item --table-name users --key file://test.json
Refer to http://docs.aws.amazon.com/cli/latest/reference/dynamodb/get-item.html
Hope that helps
On Windows, the quotes inside the key JSON can be escaped by doubling them:
aws dynamodb get-item --table-name ProductCatalog --key "{""Id"":{""N"":""205""}}" --no-verify-ssl