I have a DynamoDB table called "ZombieSession" with a "SessionId" primary key of type "S" (string).
The local service is running at http://localhost:8181.
For local tests, I'm trying to execute these commands:
(1)
aws dynamodb delete-item --table-name ZombieSession --key
'4ae40a08-007c-4785-babd-caff0ed12d1d' --endpoint-url
http://localhost:8181 --region us-east-1
That results in:
Error parsing parameter '--key': Invalid JSON:
'4ae40a08-007c-4785-babd-caff0ed12d1d'
and
(2)
aws dynamodb delete-item --table-name ZombieSession --key
'{"SessionId":{"S":"4ae40a08-007c-4785-babd-caff0ed12d1d"}}'
--endpoint-url http://localhost:8181 --region us-east-1
That results in:
Error parsing parameter '--key': Invalid JSON:
'{SessionId:{S:4ae40a08-007c-4785-babd-caff0ed12d1d}}'
I haven't found any documentation example for this.
What's the appropriate command for this operation?
I discovered that the value of the --key parameter needs escaped quotation marks; judging by the stripped quotes in the error output above, my shell was eating the inner double quotes:
aws dynamodb delete-item --table-name ZombieSession --key
"{\"SessionId\":{\"S\":\"4ae40a08-007c-4785-babd-caff0ed12d1d\"}}"
--endpoint-url http://localhost:8181 --region us-east-1
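For what it's worth, the stripped quotes in the error from attempt (2) suggest a Windows shell was removing the inner double quotes; on a POSIX shell such as bash or zsh, the single-quoted form from attempt (2) should also work as-is:
aws dynamodb delete-item --table-name ZombieSession --key '{"SessionId":{"S":"4ae40a08-007c-4785-babd-caff0ed12d1d"}}' --endpoint-url http://localhost:8181 --region us-east-1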
I would like to get a Step Function ARN using the AWS CLI by name with a wildcard string, or get a Step Function ARN by Step Function name.
Here is an example:
aws stepfunctions list-state-machines --region us-east-1
I got this:
{
"stateMachines": [
{
"stateMachineArn": "arn:aws:states:us-east-1:012345678912:stateMachine:firstStepFunc",
"name": "firstStepFunc",
"type": "STANDARD",
"creationDate": "2022-12-01T14:43:09.577000+01:00"
}
]
}
I tried this one:
aws stepfunctions list-state-machines --query 'stateMachines[*].stateMachineArn' --region us-east-1 --output text
And got the expected result:
arn:aws:states:us-east-1:012345678912:stateMachine:firstStepFunc
But if there is more than one Step Function, it won't work.
I need something like the following, but I have no idea how to write the query in the proper way:
aws stepfunctions list-state-machines --query 'stateMachines[*].stateMachineArn[?stateMachineArn==`*`]' --region us-east-1
aws stepfunctions list-state-machines --query 'stateMachines[*].stateMachineArn[?name==`*`]' --region us-east-1
Thanks in advance!
You could use the contains function for this, for example:
aws stepfunctions list-state-machines --query 'stateMachines[?contains(name,`dev`)]|[*].stateMachineArn' --region us-east-1 --output text
The expression above returns the ARNs of all state machines that have the dev keyword in their name. If you want only one (the first one, for example), you can do the following:
aws stepfunctions list-state-machines --query 'stateMachines[?contains(name,`dev`)]|[0].stateMachineArn' --region us-east-1 --output text
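If you then need the ARN in a follow-up call, you can capture it in a shell variable first. A minimal sketch, assuming a bash-like shell and that the filter matches exactly one state machine:
ARN=$(aws stepfunctions list-state-machines --query 'stateMachines[?contains(name,`dev`)]|[0].stateMachineArn' --region us-east-1 --output text)
aws stepfunctions describe-state-machine --state-machine-arn "$ARN"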
I'm searching for values in DynamoDB like this:
aws dynamodb get-item --table-name table --key '{"name": {"S":"test"}}' --output text --query Item.value
If the item was not found, it prints None. How can I avoid that and print an empty string instead?
You can give the query a default value, so that an empty string is returned when Item.value is not found:
aws dynamodb get-item --table-name table --key '{"name": {"S":"test"}}' --output text --query 'Item.value || ``'
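This also makes the result easy to test from a script. A minimal sketch, assuming bash and the table/key from the question:
value=$(aws dynamodb get-item --table-name table --key '{"name": {"S":"test"}}' --output text --query 'Item.value || ``')
if [ -z "$value" ]; then echo "item not found"; fi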
You could re-direct to /dev/null?
aws dynamodb get-item --table-name table --key '{"name": {"S":"test"}}' --output text --query Item.value > /dev/null 2>&1
Be careful, because by doing so, you redirect both stdout and stderr to /dev/null. So if there is some error, you won't see it.
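If you only want to suppress the printed value while keeping errors visible, redirect stdout alone:
aws dynamodb get-item --table-name table --key '{"name": {"S":"test"}}' --output text --query Item.value > /dev/null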
I am very new to AWS. I was trying to create a database table by running the following command:
aws dynamodb create-table --table-name pizza-order --attribute-definitions AttributeName=orderId, AttributeType=S --key-schema AttributeName=orderId,keyType=HASH --provisioned-throughput ReadCapacityUnit=1,WriteCapacityUnit=1 --region us-east-2 --query TableDescription.TableArn --output text
But I am getting an error like this:
Error parsing parameter '--attribute-definitions': Expected: '', received: '' for input:
AttributeName=orderId,
The command works with the following modifications:
Remove the space after orderId,
Capitalize KeyType correctly
Pluralize ReadCapacityUnits and WriteCapacityUnits correctly
Here is the working command for your reference:
aws dynamodb create-table --table-name pizza-order --attribute-definitions AttributeName=orderId,AttributeType=S --key-schema AttributeName=orderId,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 --region us-east-2 --query TableDescription.TableArn --output text
Docs for reference
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html
To verify that DynamoDB has finished creating the pizza-order table, use the describe-table command.
aws dynamodb describe-table --table-name pizza-order | grep TableStatus
This command returns the following result. When DynamoDB finishes creating the table, the value of the TableStatus field is set to ACTIVE.
"TableStatus": "ACTIVE",
Note: whenever you are stuck or want to know the details of a command, it's good practice to check the AWS CLI docs for the particular API, e.g. create-table.
I am trying to bulk update all S3 buckets with default encryption. To do that, I generate a JSON file using the command below:
aws s3api list-buckets --query "Buckets[].Name" >> s3.json
The result was the names of all my S3 buckets.
How do I pass that JSON file into the command so I can enable default encryption?
I also tried the following:
aws s3api list-buckets --query 'Buckets[*].[Name]' --output text | xargs -I {} bash -c 'aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}''
But I am getting the error below:
Error parsing parameter '--server-side-encryption-configuration': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {Rule
aws s3api put-bucket-encryption --bucket bucketnames --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
I have also tried the following, but it does not work:
aws s3api put-bucket-encryption \
--bucket value \
--server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}' \
--cli-input-json file://s3bucket.json
Please let me know how to update my command to enable default encryption.
Below is the code snippet to solve your problem:
#!/bin/bash
# Check if each bucket has SSE enabled, then encrypt using SSE AES256.
# List all buckets and store the names in an array.
arr=($(aws s3api list-buckets --query "Buckets[].Name" --output text))
# Check the status before encryption:
for i in "${arr[@]}"
do
echo "Check if SSE is enabled for bucket -> ${i}"
aws s3api get-bucket-encryption --bucket "${i}" | jq -r '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm'
done
# Encrypt all buckets in your account:
for i in "${arr[@]}"
do
echo "Encrypting bucket with SSE AES256 for -> ${i}"
aws s3api put-bucket-encryption --bucket "${i}" --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
done
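One caveat: get-bucket-encryption fails with a ServerSideEncryptionConfigurationNotFoundError for buckets that have no default encryption configured yet. A small sketch of how the status-check loop could report that instead of printing a raw error:
aws s3api get-bucket-encryption --bucket "${i}" 2>/dev/null || echo "no default encryption set for ${i}"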
aws s3api list-buckets --query "Buckets[].Name" \
| jq .[] \
| xargs -I '{}' aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
Worked for me
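If your xargs implementation does not strip the JSON double quotes that jq emits, asking jq for raw output with -r avoids the problem:
aws s3api list-buckets --query "Buckets[].Name" | jq -r '.[]' | xargs -I '{}' aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'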
If you wanted to do it in Python it would be something like this (not tested!):
import boto3

s3_client = boto3.client('s3')
response = s3_client.list_buckets()

# list_buckets returns a list of dicts, so pass each bucket's name.
for bucket in response['Buckets']:
    s3_client.put_bucket_encryption(
        Bucket=bucket['Name'],
        ServerSideEncryptionConfiguration={
            'Rules': [
                {
                    'ApplyServerSideEncryptionByDefault': {
                        'SSEAlgorithm': 'AES256'
                    }
                },
            ]
        }
    )
I have data in S3 that is partitioned by category and date as follows:
s3://mybucket/category=1/date=2018-08-30/data1.json
s3://mybucket/category=1/date=2018-08-31/data2.json
s3://mybucket/category=2/date=2018-08-30/data3.json
s3://mybucket/category=2/date=2018-08-31/data4.json
After running the crawler, I have two partition keys in my metadata table: one for category, the other for date. I want to retrieve partitions that match certain keys using the GetPartitions API so I began experimenting with the AWS CLI. If I run this command:
aws glue get-partitions --database-name mydb --table-name mytable --expression "category=1" --region us-west-2
I successfully retrieve the partition as expected. However, I tried the following command:
aws glue get-partitions --database-name mydb --table-name mytable --expression "category=1 AND date=2018-08-30" --region us-west-2
and the response was
An error occurred (InvalidInputException) when calling the
GetPartitions operation: Unsupported expression '2018 - 08 - 30'
Another command that produced this error was
aws glue get-partitions --database-name mydb --table-name mytable --expression category=1\ AND\ date=2018-08-30 --region us-west-2
I also tried modifying the call by using the following command:
aws glue get-partitions --database-name mydb --table-name mytable --expression "category=1 AND date=2018\-08\-30" --region us-west-2
which gave me the error
An error occurred (InvalidInputException) when calling the GetPartitions operation: Lexical error at line 1, column 35. Encountered: "\" (92), after : ""
Is the GetPartitions API able to handle expressions for partitions that contain hyphens? If so, what is the correct syntax?
Partitions that are initially generated by a crawler in AWS Glue will have type string in the metadata catalog. While some of my categories contained hyphens, they were UUIDs (e.g. category=so36-fkw1-...), so they were not interpreted as expressions. On the other hand, the dates contain only numeric characters and -, which was the root of the problem. I was able to fix it by enclosing the dates in single quotes as follows:
aws glue get-partitions --database-name mydb --table-name mytable --expression category=1\ AND\ date=\'2018-08-30\' --region us-west-2
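Equivalently, you can let double quotes handle the shell quoting so the single quotes stay literal inside the expression:
aws glue get-partitions --database-name mydb --table-name mytable --expression "category=1 AND date='2018-08-30'" --region us-west-2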