AWSCli dynamodb update-item command syntax - amazon-web-services

I am using the AWS CLI to write a shell script that updates an attribute on multiple items in a DynamoDB table. I read the attribute value from a file and try to update the table by injecting the shell variable's value into the command. The documentation at http://docs.aws.amazon.com/cli/latest/reference/dynamodb/update-item.html suggests using separate JSON files for --expression-attribute-names and --expression-attribute-values. However, I do not want to create separate JSON files; I want a single command that updates an item for a given attribute value.
My table name = MY_TABLE_NAME
hashkey = AccountId
shell script variable holding the value of AccountId = accountId
attribute name that needs to be updated = Version
shell script variable holding the value of Version = ver
I have got something like :
aws dynamodb update-item --table-name MY_TABLE_NAME --key '{"AccountId": {"S": '$accountId'}}' --update-expression "SET Version = '{"Version": {"S": '$ver'}}'" --condition-expression "attribute_exists(Version)" --return-values UPDATED_NEW
But the above command does not work. Can someone point me to the correct syntax?

My AWS CLI version did not support the --update-expression option, so I used --attribute-updates instead.
Here is my command (note the extra double quotes around the shell variables, so the expanded values remain valid JSON strings, and the $( ) command substitution):
updatedVersion=$(aws dynamodb update-item --table-name MY_TABLE_NAME --key '{"AccountId": {"S": "'"$accountId"'"}}' --attribute-updates '{"Version": {"Value": {"S": "'"$desiredVersion"'"},"Action": "PUT"}}' --return-values UPDATED_NEW | jq -r '.Attributes.Version.S')

Below is the update command with --update-expression
aws --region "us-east-1" dynamodb update-item \
--table-name "MY_TABLE_NAME" --key \
'{"Primary_Column_name":{"S":"Primary_Column_value"}}' \
--update-expression 'SET #H = :h' \
--expression-attribute-names '{"#H":"Column_name_to_change"}' \
--expression-attribute-values '{":h":{"S":"Changed_Column_value"}}'
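A common stumbling block in these commands is that string values spliced in from shell variables must keep their own double quotes after expansion. One way to sidestep the quoting entirely is to generate the JSON arguments programmatically; below is a minimal sketch in Python using the question's table and attribute names (the :v placeholder name is an assumption):

```python
import json

account_id = "12345"  # value read from the input file
ver = "7"

# json.dumps always emits well-formed JSON, so no manual quoting is needed.
key = json.dumps({"AccountId": {"S": account_id}})
values = json.dumps({":v": {"S": ver}})

print(key)     # passed to --key
print(values)  # passed to --expression-attribute-values

# The strings would then be used roughly as:
# aws dynamodb update-item --table-name MY_TABLE_NAME \
#   --key "$KEY" --update-expression 'SET Version = :v' \
#   --expression-attribute-values "$VALUES"
```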

The other answers work well on macOS and Linux.
If you want to run them on Windows, use double quotes (") instead of single quotes, and escape each inner double quote by doubling it ("").
Example:
aws dynamodb update-item --table-name MY_TABLE_NAME --key "{""PRIMARY_KEY_NAME"":{""S"":""PRIMARY_KEY_VALUE""}}" --update-expression "SET #G = :g" --expression-attribute-names "{""#G"":""COLUMN_NAME_TO_UPDATE_VALUE""}" --expression-attribute-values "{"":g"":{""N"":""DESIRED_VALUE""}}"

Related

How to get the full results of a query to CSV file using AWS/Athena from CLI?

I need to download a full table content that I have on my AWS/Glue/Catalog using AWS/Athena. At the moment what I do it is running a select * from my_table from the Dashboard and saving the result locally as CSV always from Dashboard. Is there a way to get the same result using AWS/CLI?
From the documentation I can see https://docs.aws.amazon.com/cli/latest/reference/athena/get-query-results.html but it is not quite what I need.
You can run an Athena query with the AWS CLI using the aws athena start-query-execution API call. You will then need to poll with aws athena get-query-execution until the query has finished; at that point the result of that call will also contain the location of the query results on S3, which you can download with aws s3 cp.
Here's an example script:
#!/usr/bin/env bash

region=us-east-1                        # change this to the region you are using
query='SELECT NOW()'                    # change this to your query
output_location='s3://example/location' # change this to a writable location

query_execution_id=$(aws athena start-query-execution \
  --region "$region" \
  --query-string "$query" \
  --result-configuration "OutputLocation=$output_location" \
  --query QueryExecutionId \
  --output text)

while true; do
  status=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.Status.State \
    --output text)
  # QUEUED and RUNNING both mean the query is still in flight
  if [[ $status != 'RUNNING' && $status != 'QUEUED' ]]; then
    break
  else
    sleep 5
  fi
done

if [[ $status = 'SUCCEEDED' ]]; then
  result_location=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.ResultConfiguration.OutputLocation \
    --output text)
  exec aws s3 cp "$result_location" -
else
  reason=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.Status.StateChangeReason \
    --output text)
  echo "Query $query_execution_id failed: $reason" 1>&2
  exit 1
fi
If your primary work group has an output location, or you want to use a different work group that also has a defined output location, you can modify the start-query-execution call accordingly. Otherwise you probably have an S3 bucket called aws-athena-query-results-NNNNNNN-XX-XXXX-N that was created by Athena at some point and is used for outputs when you use the UI.
You cannot save results directly from the AWS CLI, but you can specify a query result location, and Amazon Athena will automatically save a copy of the query results to the Amazon S3 location you specify.
You could then use the AWS CLI to download that results file.

Groovy script issue with escaping quotes

I'm running this shell command using groovy (which worked in bash):
aws --profile profileName --region us-east-1 dynamodb update-item --table-name tableName --key '{"group_name": {"S": "group_1"}}' --attribute-updates '{"attr1": {"Value": {"S": "STOP"},"Action": "PUT"}}'
This updates the value of an item to STOP in DynamoDB. In my groovy script, I'm running this command like so:
String command = "aws --profile profileName --region us-east-1 dynamodb update-item --table-name tableName --key '{\"group_name\": {\"S\": \"group_1\"}}' --attribute-updates '{\"attr1\": {\"Value\": {\"S\": \"STOP\"},\"Action\": \"PUT\"}}'"
println(command.execute().text)
When I run this with groovy afile.groovy, nothing is printed out and when I check the table in DynamoDB, it's not updated to STOP. There is something wrong with the way I'm escaping the quotes but I'm not sure what. Would appreciate any insights.
Sidenote: When I do a simple aws command like aws s3 ls it works and prints out the results so it's something with this particular command that is throwing it off.
You don't quote for Groovy (and the underlying exec); you would have to quote for your shell. execute() on a String does not work like a shell: the underlying code just splits at whitespace, and any quotes are passed down as part of the argument.
Use ["aws", "--profile", profile, ..., "--key", '{"group_name": ...', ...].execute() and ignore any quoting.
And instead of banging strings together to generate JSON, use groovy.json.JsonOutput.toJson([group_name: [S: "group_1"]])
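To see why the string form fails, it helps to reproduce the naive whitespace split (sketched here in Python; the command string is the question's own): the quote characters survive as literal text and each JSON document is broken across several arguments, which is essentially what String.execute() hands to the process.

```python
import shlex

command = ("aws --profile profileName --region us-east-1 dynamodb update-item "
           "--table-name tableName "
           "--key '{\"group_name\": {\"S\": \"group_1\"}}' "
           "--attribute-updates '{\"attr1\": {\"Value\": {\"S\": \"STOP\"},\"Action\": \"PUT\"}}'")

# What Groovy's String.execute() effectively does: split on whitespace only,
# keeping quote characters as literal parts of each argument.
naive = command.split()

# What a shell would do: honour the quoting, keeping each JSON blob intact.
shell_like = shlex.split(command)

print(len(naive), len(shell_like))
```

The list-of-arguments form recommended above corresponds to the shlex result: each JSON document stays one argument, so no escaping is needed.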

How to add an index from command line to DynamoDB after table was created

Could you please point me to the appropriate documentation topic or provide an example of how to add an index to DynamoDB, as I couldn't find any related info.
According to this blog post: http://aws.amazon.com/blogs/aws/amazon-dynamodb-update-online-indexing-reserved-capacity-improvements/?sc_ichannel=em&sc_icountry=global&sc_icampaigntype=launch&sc_icampaign=em_130867660&sc_idetail=em_1273527421&ref_=pe_411040_130867660_15 it seems to be possible from the UI, but there is no mention of the CLI.
Thanks in advance,
Yevhenii
The aws command has help for every level of subcommand. For example, you can run aws help to get a list of all service names and discover the name dynamodb. Then you can aws dynamodb help to find the list of DDB commands and find that update-table is a likely culprit. Finally, aws dynamodb update-table help shows you the flags needed to add a global secondary index.
The AWS CLI documentation is really poor and lacks examples. Evidently AWS is promoting the SDK or the console.
This should work for creating the index (the shorthand document is wrapped in single quotes so the shell passes it as one argument):
aws dynamodb update-table --table-name Test \
  --attribute-definitions AttributeName=City,AttributeType=S AttributeName=State,AttributeType=S \
  --global-secondary-index-updates \
  'Create={IndexName=state-index,KeySchema=[{AttributeName=State,KeyType=HASH}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[City]},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}}'
Here's a shell function that does this: it sets the R/W capacities and optionally handles --global-secondary-index-updates if an index name is provided as the fourth argument.
dynamodb_set_caps() {
  # [ "$1" ] || fail_exit "Missing table name"
  # [ "$2" ] || fail_exit "Missing read capacity"
  # [ "$3" ] || fail_exit "Missing write capacity"
  if [ "$4" ]; then
    aws dynamodb update-table --region "$region" --table-name "$1" \
      --provisioned-throughput "ReadCapacityUnits=$2,WriteCapacityUnits=$3" \
      --global-secondary-index-updates \
      "Update={IndexName=$4,ProvisionedThroughput={ReadCapacityUnits=$2,WriteCapacityUnits=$3}}"
  else
    aws dynamodb update-table --region "$region" --table-name "$1" \
      --provisioned-throughput "ReadCapacityUnits=$2,WriteCapacityUnits=$3"
  fi
}
Completely agree that the AWS docs are lacking in this area.
Here is a reference for creating a global secondary index:
https://docs.aws.amazon.com/pt_br/amazondynamodb/latest/developerguide/getting-started-step-6.html
However, the example there only covers an index on a simple primary key.
This code helped me create a global secondary index for a composite primary key:
aws dynamodb update-table \
--table-name YourTableName \
--attribute-definitions AttributeName=GSI1PK,AttributeType=S \
AttributeName=GSI1SK,AttributeType=S \
AttributeName=createdAt,AttributeType=S \
--global-secondary-index-updates \
"[{\"Create\":{\"IndexName\": \"GSI1\",\"KeySchema\":[{\"AttributeName\":\"GSI1PK\",\"KeyType\":\"HASH\"},{\"AttributeName\":\"GSI1SK\",\"KeyType\":\"RANGE\"}], \
\"ProvisionedThroughput\": {\"ReadCapacityUnits\": 5, \"WriteCapacityUnits\": 5 },\"Projection\":{\"ProjectionType\":\"ALL\"}}}]" --endpoint-url http://localhost:8000
Note: the --endpoint-url option in the last line assumes you are creating this index on a local DynamoDB instance. If not, just delete it.
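The backslash-escaped JSON above is easy to get wrong; an alternative is to build the --global-secondary-index-updates document programmatically and pass it to the CLI in one piece. A sketch in Python reusing the answer's index and attribute names:

```python
import json

# The same Create payload as above, expressed as a plain data structure.
gsi_updates = json.dumps([{
    "Create": {
        "IndexName": "GSI1",
        "KeySchema": [
            {"AttributeName": "GSI1PK", "KeyType": "HASH"},
            {"AttributeName": "GSI1SK", "KeyType": "RANGE"},
        ],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        "Projection": {"ProjectionType": "ALL"},
    }
}])

print(gsi_updates)
# The string can then be passed verbatim as the value of
# --global-secondary-index-updates (e.g. via subprocess or an argv array).
```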

How to Remove Delete Markers from Multiple Objects on Amazon S3 at once

I have an Amazon S3 bucket with versioning enabled. Due to a misconfigured lifecycle policy, many of the objects in this bucket had Delete Markers added to them.
I can remove these markers from the S3 console to restore the previous versions of these objects, but there are enough objects to make doing this manually on the web console extremely time-inefficient.
Is there a way to find all Delete Markers in an S3 bucket and remove them, restoring all files in that bucket? Ideally I would like to do this from the console itself, although I will happily write a script or use the amazon CLI tools to do this if that's the only way.
Thanks!
Use this to restore the files inside a specific folder. I've used AWS CLI commands in my script. Provide input as:
sh scriptname.sh bucketname path/to/a/folder
Script:
#!/bin/bash
# Provide the bucket name and the prefix of the folder to restore.
# Removes the delete marker for each object under the prefix.
aws s3api list-object-versions --bucket "$1" --prefix "$2" --output text |
grep "DELETEMARKERS" | while read -r obj
do
  KEY=$(echo "$obj" | awk '{print $3}')
  VERSION_ID=$(echo "$obj" | awk '{print $5}')
  echo "$KEY"
  echo "$VERSION_ID"
  aws s3api delete-object --bucket "$1" --key "$KEY" --version-id "$VERSION_ID"
done
Edit: put $VERSION_ID in correct position in the script
Here's a sample Python implementation:
import boto3
import botocore

BUCKET_NAME = 'BUCKET_NAME'

s3 = boto3.resource('s3')

def main():
    bucket = s3.Bucket(BUCKET_NAME)
    versions = bucket.object_versions
    for version in versions.all():
        if is_delete_marker(version):
            version.delete()

def is_delete_marker(version):
    try:
        # note: head() is faster than get()
        version.head()
        return False
    except botocore.exceptions.ClientError as e:
        if 'x-amz-delete-marker' in e.response['ResponseMetadata']['HTTPHeaders']:
            return True
        # an older version of the key but not a DeleteMarker
        elif '404' == e.response['Error']['Code']:
            return False
        raise  # unexpected error: propagate instead of silently returning None

if __name__ == '__main__':
    main()
For some context for this answer see:
https://docs.aws.amazon.com/AmazonS3/latest/dev/DeleteMarker.html
If you try to get an object and its current version is a delete marker, Amazon S3 responds with:
A 404 (Object not found) error
A response header, x-amz-delete-marker: true
The response header tells you that the object accessed was a delete marker. This response header never returns false; if the value is false, Amazon S3 does not include this response header in the response.
The only way to list delete markers (and other versions of an object) is by using the versions subresource in a GET Bucket versions request. A simple GET does not retrieve delete marker objects.
Unfortunately, despite what is written in https://github.com/boto/botocore/issues/674, checking whether ObjectVersion.size is None is not a reliable way to determine if a version is a delete marker, as it will also be true for previously deleted versions of folder keys.
Currently, boto3 is missing a straightforward way to determine if an ObjectVersion is a DeleteMarker. See https://github.com/boto/boto3/issues/1769
However, the ObjectVersion.head() and .get() operations will throw an exception on an ObjectVersion that is a DeleteMarker. Catching this exception is likely the only reliable way of determining whether an ObjectVersion is a DeleteMarker.
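The head()-exception classification used in the earlier answer can be isolated into a small pure function, which makes the logic easy to check without touching S3. The dictionaries below are hand-built to mimic the two e.response shapes that code inspects; they are illustrations, not captured responses:

```python
def is_delete_marker_response(response):
    """Return True if a ClientError response indicates a delete marker,
    False if it is just a missing/older version."""
    headers = response.get('ResponseMetadata', {}).get('HTTPHeaders', {})
    if 'x-amz-delete-marker' in headers:
        return True
    if response.get('Error', {}).get('Code') == '404':
        return False
    raise ValueError("unexpected error response")

# Hand-built stand-ins for the two response shapes inspected above:
marker_resp = {'ResponseMetadata': {'HTTPHeaders': {'x-amz-delete-marker': 'true'}},
               'Error': {'Code': '405'}}
plain_404 = {'ResponseMetadata': {'HTTPHeaders': {}},
             'Error': {'Code': '404'}}

print(is_delete_marker_response(marker_resp), is_delete_marker_response(plain_404))
# → True False
```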
I just wrote a program (using boto) to solve the same problem:
from boto.s3 import deletemarker
from boto.s3.connection import S3Connection

conn = S3Connection()  # uses credentials from the environment/boto config

def restore_bucket(bucket_name):
    bucket = conn.get_bucket(bucket_name)
    for version in bucket.list_versions():
        if isinstance(version, deletemarker.DeleteMarker) and version.is_latest:
            bucket.delete_key(version.name, version_id=version.version_id)
Define variables
PROFILE="personal"
REGION="eu-west-1"
BUCKET="mysql-backend-backups-prod"
Delete DeleteMarkers at once
aws --profile $PROFILE s3api delete-objects \
--region $REGION \
--bucket $BUCKET \
--delete "$(aws --profile $PROFILE s3api list-object-versions \
--region $REGION \
--bucket $BUCKET \
--output=json \
--query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
Delete versions at once
aws --profile $PROFILE s3api delete-objects \
--region $REGION \
--bucket $BUCKET \
--delete "$(aws --profile $PROFILE s3api list-object-versions \
--region $REGION \
--bucket $BUCKET \
--output=json \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
And delete S3 bucket afterward
aws --profile $PROFILE s3api delete-bucket \
--region $REGION \
--bucket $BUCKET
You would need to write a program to:
Loop through all objects in the Amazon S3 bucket
Retrieve the version IDs for each version of each object
Delete the delete markers
This could be done fairly easily using the SDK, such as boto.
The AWS Command-Line Interface (CLI) can also be used, but you would have to build a script around it to capture the IDs and then delete the markers.
I dealt with this problem a few weeks ago.
Finally I managed to write a PHP function that deletes the delete markers of the latest versions of the files within a prefix.
Personally, it worked perfectly, and by iterating over all the prefixes with this script I mended my own mistake of unintentionally deleting many S3 objects.
My PHP implementation is below:
private function restore_files($file)
{
    $storage = get_storage()->getDriver()->getAdapter()->getClient();
    $bucket_name = 'my_bucket_name';

    $s3_path = $file->s3_path;
    $restore_folder_path = pathinfo($s3_path, PATHINFO_DIRNAME);

    $data = $storage->listObjectVersions([
        'Bucket' => $bucket_name,
        'Prefix' => $restore_folder_path,
    ]);

    $data_array = $data->toArray();
    $deleteMarkers = $data_array['DeleteMarkers'];

    foreach ($deleteMarkers as $key => $delete_marker) {
        if ($delete_marker["IsLatest"]) {
            $objkey = $delete_marker["Key"];
            $objVersionId = $delete_marker["VersionId"];

            $delete_response = $storage->deleteObjectAsync([
                'Bucket' => $bucket_name,
                'Key' => $objkey,
                'VersionId' => $objVersionId
            ]);
        }
    }
}
Some considerations about the script:
The code was implemented using the Laravel framework, so in the variable $storage I get the plain PHP SDK client, without Laravel's wrapper; $storage is therefore the S3 SDK client object. Here is the documentation that I used.
The $file parameter that the function receives is an object that has the s3_path among its properties. So, in the $restore_folder_path variable, I get the prefix of the object's S3 path.
Finally, I list all the objects under the prefix in S3, iterate over the DeleteMarkers list, and check whether the current entry is the latest delete marker. If it is, I call deleteObjectAsync with the specific version ID of the delete marker I want to remove. This is the way the S3 documentation specifies to remove a delete marker.
Most of the above versions are very slow on large buckets, as they use delete-object rather than delete-objects. Here is a variant of the bash version which uses awk to issue requests in batches of 100:
Edit: just saw @Viacheslav's version, which also uses delete-objects and is nice and clean, but will fail with large numbers of markers due to command-line length limits.
#!/bin/bash
bucket=$1
prefix=$2

aws s3api list-object-versions \
  --bucket "$bucket" \
  --prefix "$prefix" \
  --query 'DeleteMarkers[][Key,VersionId]' \
  --output text |
awk '{ acc = acc "{Key=" $1 ",VersionId=" $2 "}," }
     NR % 100 == 0 { print "Objects=[" acc "],Quiet=False"; acc = "" }
     END { if (acc) print "Objects=[" acc "],Quiet=False" }' |
while read -r batch; do
  aws s3api delete-objects --bucket "$bucket" --delete "$batch" --output text
done
Set up a lifecycle rule to remove them after a certain number of days; otherwise listing the objects to delete them yourself will cost you $0.005 per 1,000 requests.
So the most efficient way is to set up a lifecycle rule.
Here is the step-by-step method:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
I checked the object sizes: a delete marker's size is None. The following removes all markers:
import boto3

default_session = boto3.session.Session(profile_name="default")
s3_re = default_session.resource(service_name="s3", region_name="ap-northeast-2")

for each_bucket in s3_re.buckets.all():
    bucket = s3_re.Bucket(each_bucket.name)
    for ver in bucket.object_versions.all():
        if ver.size is None:  # delete markers report no size
            print(ver.delete())

How do I delete a versioned bucket in AWS S3 using the CLI?

I have tried both s3cmd:
$ s3cmd -r -f -v del s3://my-versioned-bucket/
And the AWS CLI:
$ aws s3 rm s3://my-versioned-bucket/ --recursive
But both of these commands simply add DELETE markers to S3. The command for removing a bucket also doesn't work (from the AWS CLI):
$ aws s3 rb s3://my-versioned-bucket/ --force
Cleaning up. Please wait...
Completed 1 part(s) with ... file(s) remaining
remove_bucket failed: s3://my-versioned-bucket/ A client error (BucketNotEmpty) occurred when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
Ok... how? There's no information in their documentation for this. S3Cmd says it's a 'fully-featured' S3 command-line tool, but it makes no reference to versions other than its own. Is there any way to do this without using the web interface, which will take forever and requires me to keep my laptop on?
I ran into the same limitation of the AWS CLI. I found the easiest solution to be to use Python and boto3:
#!/usr/bin/env python
BUCKET = 'your-bucket-here'
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET)
bucket.object_versions.delete()
# if you want to delete the now-empty bucket as well, uncomment this line:
#bucket.delete()
A previous version of this answer used boto but that solution had performance issues with large numbers of keys as Chuckles pointed out.
Using boto3 it's even easier than with the proposed boto solution to delete all object versions in an S3 bucket:
#!/usr/bin/env python
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
bucket.object_versions.all().delete()
Works fine also for very large amounts of object versions, although it might take some time in that case.
You can delete all the objects in a versioned S3 bucket like this (I don't know how to delete only specific objects this way):
$ aws s3api delete-objects \
--bucket <value> \
--delete "$(aws s3api list-object-versions \
--bucket <value> | \
jq '{Objects: [.Versions[] | {Key:.Key, VersionId : .VersionId}], Quiet: false}')"
Alternatively without jq:
$ aws s3api delete-objects \
--bucket ${bucket_name} \
--delete "$(aws s3api list-object-versions \
--bucket "${bucket_name}" \
--output=json \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
These two bash commands are enough for me to enable deletion of the bucket!
1: Delete objects
aws s3api delete-objects --bucket ${buckettoempty} --delete "$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
2: Delete markers
aws s3api delete-objects --bucket ${buckettoempty} --delete "$(aws s3api list-object-versions --bucket ${buckettoempty} --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
Looks like as of now, there is an Empty button in the AWS S3 console.
Just select your bucket and click on it. It will ask you to confirm your decision by typing permanently delete
Note, this will not delete the bucket itself.
Here is a one-liner you can cut and paste into the command line to delete all versions and delete markers (it requires the AWS tools; set $BUCKET_TO_PURGE to your bucket name):
echo '#!/bin/bash' > deleteBucketScript.sh \
&& aws --output text s3api list-object-versions --bucket $BUCKET_TO_PURGE \
| grep -E "^VERSIONS" |\
awk '{print "aws s3api delete-object --bucket $BUCKET_TO_PURGE --key "$4" --version-id "$8";"}' >> \
deleteBucketScript.sh && . deleteBucketScript.sh; rm -f deleteBucketScript.sh; echo '#!/bin/bash' > \
deleteBucketScript.sh && aws --output text s3api list-object-versions --bucket $BUCKET_TO_PURGE \
| grep -E "^DELETEMARKERS" | grep -v "null" \
| awk '{print "aws s3api delete-object --bucket $BUCKET_TO_PURGE --key "$3" --version-id "$5";"}' >> \
deleteBucketScript.sh && . deleteBucketScript.sh; rm -f deleteBucketScript.sh;
then you could use:
aws s3 rb s3://bucket-name --force
If you have to delete/empty large S3 buckets, it becomes quite inefficient (and expensive) to delete every single object and version. It's often more convenient to let AWS expire all objects and versions.
aws s3api put-bucket-lifecycle-configuration \
--lifecycle-configuration '{"Rules":[{
"ID":"empty-bucket",
"Status":"Enabled",
"Prefix":"",
"Expiration":{"Days":1},
"NoncurrentVersionExpiration":{"NoncurrentDays":1}
}]}' \
--bucket YOUR-BUCKET
Then you just have to wait 1 day and the bucket can be deleted with:
aws s3api delete-bucket --bucket YOUR-BUCKET
For those using multiple profiles via ~/.aws/config
import boto3
PROFILE = "my_profile"
BUCKET = "my_bucket"
session = boto3.Session(profile_name = PROFILE)
s3 = session.resource('s3')
bucket = s3.Bucket(BUCKET)
bucket.object_versions.delete()
One way to do it is iterate through the versions and delete them. A bit tricky on the CLI, but as you mentioned Java, that would be more straightforward:
AmazonS3Client s3 = new AmazonS3Client();
String bucketName = "deleteversions-" + UUID.randomUUID();

// Creates the bucket
s3.createBucket(bucketName);

// Enables versioning
BucketVersioningConfiguration configuration =
    new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED);
s3.setBucketVersioningConfiguration(
    new SetBucketVersioningConfigurationRequest(bucketName, configuration));

// Puts versions
s3.putObject(bucketName, "some-key", new ByteArrayInputStream("some-bytes".getBytes()), null);
s3.putObject(bucketName, "some-key", new ByteArrayInputStream("other-bytes".getBytes()), null);

// Removes all versions
for (S3VersionSummary version : S3Versions.inBucket(s3, bucketName)) {
    String key = version.getKey();
    String versionId = version.getVersionId();
    s3.deleteVersion(bucketName, key, versionId);
}

// Removes the bucket
s3.deleteBucket(bucketName);
System.out.println("Done!");
You can also batch delete calls for efficiency if needed.
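Batch deletion mostly comes down to grouping the (key, versionId) pairs into chunks of at most 1,000, the DeleteObjects per-request limit. A small sketch of the chunking in Python (the sample data is made up):

```python
def chunks(items, size=1000):
    """Yield successive batches of at most `size` items
    (1,000 is the DeleteObjects per-request limit)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Made-up version listing: 2,500 (key, versionId) pairs.
versions = [("key-%d" % i, "v1") for i in range(2500)]

batch_sizes = [len(b) for b in chunks(versions)]
print(batch_sizes)  # → [1000, 1000, 500]
```

Each batch would then be sent as one DeleteObjects (delete-objects) request instead of 1,000 individual deletes.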
If you want a pure CLI approach (with jq):
aws s3api list-object-versions \
  --bucket "$bucket" \
  --region "$region" \
  --query "Versions[].Key" \
  --output json | jq 'unique' | jq -r '.[]' | while read -r key; do
    echo "deleting versions of $key"
    aws s3api list-object-versions \
      --bucket "$bucket" \
      --region "$region" \
      --prefix "$key" \
      --query "Versions[].VersionId" \
      --output json | jq 'unique' | jq -r '.[]' | while read -r version; do
        echo "deleting $version"
        aws s3api delete-object \
          --bucket "$bucket" \
          --key "$key" \
          --version-id "$version" \
          --region "$region"
    done
done
Simple bash loop I've found and implemented for N buckets:
for b in $(ListOfBuckets); do \
echo "Emptying $b"; \
aws s3api delete-objects --bucket $b --delete "$(aws s3api list-object-versions --bucket $b --output=json --query='{Objects: *[].{Key:Key,VersionId:VersionId}}')"; \
done
I ran into issues with Abe's solution as the list_buckets generator is used to create a massive list called all_keys and I spent an hour without it ever completing. This tweak seems to work better for me, I had close to a million objects in my bucket and counting!
import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket("your-bucket-name-here")

chunk_counter = 0  # this is simply a nice-to-have
keys = []
for key in bucket.list_versions():
    keys.append(key)
    if len(keys) >= 1000:  # delete_keys accepts at most 1000 keys per call
        bucket.delete_keys(keys)
        chunk_counter += 1
        keys = []
        print("Another 1000 done.... {n} chunks so far".format(n=chunk_counter))

if keys:  # flush any remaining partial batch
    bucket.delete_keys(keys)

# bucket.delete()  # as per usual, uncomment if you're sure!
Hopefully this helps anyone else encountering this S3 nightmare!
For deleting specific object(s), use a jq filter.
You may need to clean up the 'DeleteMarkers', not just the 'Versions'.
Using $() instead of backticks, you can embed variables for the bucket name and key value:
aws s3api delete-objects --bucket bucket-name --delete "$(aws s3api list-object-versions --bucket bucket-name | jq -M '{Objects: [.["Versions","DeleteMarkers"][]|select(.Key == "key-value")| {Key:.Key, VersionId : .VersionId}], Quiet: false}')"
Even though technically it's not AWS CLI, I'd recommend using AWS Tools for Powershell for this task. Then you can use the simple command as below:
Remove-S3Bucket -BucketName {bucket-name} -DeleteBucketContent -Force -Region {region}
As stated in the documentation, DeleteBucketContent flag does the following:
"If set, all remaining objects and/or object versions in the bucket
are deleted proir (sic) to the bucket itself being deleted"
Reference: https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-S3Bucket.html
This bash script found here: https://gist.github.com/weavenet/f40b09847ac17dd99d16
worked as is for me.
I saved script as: delete_all_versions.sh and then simply ran:
./delete_all_versions.sh my_foobar_bucket
and that worked without a flaw.
Did not need python or boto or anything.
You can do this from the AWS Console using Lifecycle Rules.
Open the bucket in question. Click the Management tab at the top.
Make sure the Lifecycle Sub Tab is selected.
Click + Add lifecycle rule
On Step 1 (Name and scope) enter a rule name (e.g. removeall)
Click Next to Step 2 (Transitions)
Leave this as is and click Next.
You are now on the 3. Expiration step.
Check the checkboxes for both Current Version and Previous Versions.
Click the checkbox for "Expire current version of object" and enter the number 1 for "After _____ days from object creation
Click the checkbox for "Permanently delete previous versions" and enter the number 1 for
"After _____ days from becoming a previous version"
click the checkbox for "Clean up incomplete multipart uploads"
and enter the number 1 for "After ____ days from start of upload"
Click Next
Review what you just did.
Click Save
Come back in a day and see how it is doing.
I improved the boto3 answer with Python3 and argv.
Save the following script as something like s3_rm.py.
#!/usr/bin/env python3
import sys
import boto3

def main():
    args = sys.argv[1:]
    if len(args) < 1:
        print("Usage: {} s3_bucket_name".format(sys.argv[0]))
        exit()
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(args[0])
    bucket.object_versions.delete()
    # if you want to delete the now-empty bucket as well, uncomment this line:
    #bucket.delete()

if __name__ == "__main__":
    main()
Make it executable with chmod +x s3_rm.py.
Run it like ./s3_rm.py my_bucket_name.
In the same vein as https://stackoverflow.com/a/63613510/805031 ... this is what I use to clean up accounts before closing them:
# If the data is too large, apply LCP to remove all objects within a day
# Create lifecycle-expire.json with the LCP required to purge all objects
# Based on instructions from: https://aws.amazon.com/premiumsupport/knowledge-center/s3-empty-bucket-lifecycle-rule/
cat << JSON > lifecycle-expire.json
{
  "Rules": [
    {
      "ID": "remove-all-objects-asap",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    },
    {
      "ID": "remove-expired-delete-markers",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
JSON
# Apply to ALL buckets
aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3api put-bucket-lifecycle-configuration --bucket {} --lifecycle-configuration file://lifecycle-expire.json
# Apply to a single bucket; replace $BUCKET_NAME
aws s3api put-bucket-lifecycle-configuration --bucket $BUCKET_NAME --lifecycle-configuration file://lifecycle-expire.json
...then a day later you can come back and delete the buckets using something like:
# To force empty/delete all buckets
aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3 rb s3://{} --force
# To remove only empty buckets
aws s3 ls | cut -d" " -f 3 | xargs -I{} aws s3 rb s3://{}
# To force empty/delete a single bucket; replace $BUCKET_NAME
aws s3 rb s3://$BUCKET_NAME --force
It saves a lot of time and money so worth doing when you have many TBs to delete.
I found the other answers either incomplete or requiring external dependencies to be installed (like boto), so here is one that is inspired by those but goes a little deeper.
As documented in Working with Delete Markers, before a versioned bucket can be removed, all its versions must be completely deleted, which is a two-step process:
"delete" all version objects in the bucket, which marks them as deleted but does not actually delete them
complete the deletion by deleting all the delete marker objects
Here is the pure CLI solution that worked for me (inspired by the other answers):
#!/usr/bin/env bash

bucket_name=...

del_s3_bucket_obj()
{
    local bucket_name=$1
    local obj_type=$2
    local query="{Objects: $obj_type[].{Key:Key,VersionId:VersionId}}"
    local s3_objects=$(aws s3api list-object-versions --bucket "${bucket_name}" --output=json --query="$query")
    if ! (echo "$s3_objects" | grep -q '"Objects": null'); then
        aws s3api delete-objects --bucket "${bucket_name}" --delete "$s3_objects"
    fi
}

del_s3_bucket_obj "${bucket_name}" 'Versions'
del_s3_bucket_obj "${bucket_name}" 'DeleteMarkers'
Once this is done, the following will work:
aws s3 rb "s3://${bucket_name}"
Not sure how it will fare with 1000+ objects though, if anyone can report that would be awesome.
By far the easiest method I've found is to use this CLI tool, s3wipe. It's provided as a docker container so you can use it like so:
$ docker run -it --rm slmingol/s3wipe --help
usage: s3wipe [-h] --path PATH [--id ID] [--key KEY] [--dryrun] [--quiet]
[--batchsize BATCHSIZE] [--maxqueue MAXQUEUE]
[--maxthreads MAXTHREADS] [--delbucket] [--region REGION]
Recursively delete all keys in an S3 path
optional arguments:
-h, --help show this help message and exit
--path PATH S3 path to delete (e.g. s3://bucket/path)
--id ID Your AWS access key ID
--key KEY Your AWS secret access key
--dryrun Don't delete. Print what we would have deleted
--quiet Suppress all non-error output
--batchsize BATCHSIZE # of keys to batch delete (default 100)
--maxqueue MAXQUEUE Max size of deletion queue (default 10k)
--maxthreads MAXTHREADS Max number of threads (default 100)
--delbucket If S3 path is a bucket path, delete the bucket also
--region REGION Region of target S3 bucket. Default value us-east-1
Example
Here's an example where I'm deleting all the versioned objects in a bucket and then deleting the bucket:
$ docker run -it --rm slmingol/s3wipe \
--id $(aws configure get default.aws_access_key_id) \
--key $(aws configure get default.aws_secret_access_key) \
--path s3://bw-tf-backends-aws-example-logs \
--delbucket
[2019-02-20@03:39:16] INFO: Deleting from bucket: bw-tf-backends-aws-example-logs, path: None
[2019-02-20@03:39:16] INFO: Getting subdirs to feed to list threads
[2019-02-20@03:39:18] INFO: Done deleting keys
[2019-02-20@03:39:18] INFO: Bucket is empty. Attempting to remove bucket
How it works
There's a bit to unpack here but the above is doing the following:
docker run -it --rm slmingol/s3wipe - runs the s3wipe container interactively and removes it after each execution
--id & --key - passing our access key and access id in
aws configure get default.aws_access_key_id - retrieves our key id
aws configure get default.aws_secret_access_key - retrieves our key secret
--path s3://bw-tf-backends-aws-example-logs - bucket that we want to delete
--delbucket - deletes bucket once emptied
References
https://github.com/slmingol/s3wipe
Is there a way to export an AWS CLI Profile to Environment Variables?
https://cloud.docker.com/u/slmingol/repository/docker/slmingol/s3wipe
https://gist.github.com/wknapik/191619bfa650b8572115cd07197f3baf
#!/usr/bin/env bash
set -eEo pipefail
shopt -s inherit_errexit >/dev/null 2>&1 || true
if [[ ! "$#" -eq 2 || "$1" != --bucket ]]; then
    echo -e "USAGE: $(basename "$0") --bucket <bucket>"
    exit 2
fi

# $@ := bucket_name
empty_bucket() {
    local -r bucket="${1:?}"
    for object_type in Versions DeleteMarkers; do
        local opt=() next_token=""
        while [[ "$next_token" != null ]]; do
            page="$(aws s3api list-object-versions --bucket "$bucket" --output json --max-items 1000 "${opt[@]}" \
                    --query="[{Objects: ${object_type}[].{Key:Key, VersionId:VersionId}}, NextToken]")"
            objects="$(jq -r '.[0]' <<<"$page")"
            next_token="$(jq -r '.[1]' <<<"$page")"
            case "$(jq -r .Objects <<<"$objects")" in
                '[]'|null) break;;
                *) opt=(--starting-token "$next_token")
                   aws s3api delete-objects --bucket "$bucket" --delete "$objects";;
            esac
        done
    done
}
empty_bucket "${2#s3://}"
E.g. empty_bucket.sh --bucket foo
This will delete all object versions and delete markers in a bucket in batches of 1000. Afterwards, the bucket can be deleted with aws s3 rb s3://foo.
Requires bash, awscli and jq.
This works for me, possibly on later versions and with well over 1000 items. I've been running it on a couple of million files now; however, it still hasn't finished after half a day, and there is no way to validate progress in the AWS console =/
# Set bucket name to clear out
BUCKET = 'bucket-to-clear'

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET)

max_len = 1000       # max 1000 items per request
chunk_counter = 0    # just to keep track
keys = []            # collect to delete

# clear files
def clearout():
    global bucket
    global chunk_counter
    global keys
    result = bucket.delete_objects(Delete=dict(Objects=keys))
    if result["ResponseMetadata"]["HTTPStatusCode"] != 200:
        print("Issue with response")
        print(result)
    chunk_counter += 1
    keys = []
    print(". {n} chunks so far".format(n=chunk_counter))
    return

# start
for key in bucket.object_versions.all():
    item = {'Key': key.object_key, 'VersionId': key.id}
    keys.append(item)
    if len(keys) >= max_len:
        clearout()

# make sure last files are cleared as well
if len(keys) > 0:
    clearout()

print("")
print("Done, {n} items deleted".format(n=chunk_counter*max_len))
#bucket.delete() # as per usual, uncomment if you're sure!
To add to the Python solutions provided here: if you are getting a boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request error, try creating a ~/.boto file with the following data:
[Credentials]
aws_access_key_id = aws_access_key_id
aws_secret_access_key = aws_secret_access_key
[s3]
host=s3.eu-central-1.amazonaws.com
aws_access_key_id = aws_access_key_id
aws_secret_access_key = aws_secret_access_key
This helped me delete a bucket in the Frankfurt region.
Original answer: https://stackoverflow.com/a/41200567/2586441
If you use the AWS SDK for JavaScript S3 Client for Node.js (@aws-sdk/client-s3), you can use the following code:
const { S3Client, ListObjectsCommand } = require('@aws-sdk/client-s3')
const endpoint = 'YOUR_END_POINT'
const region = 'YOUR_REGION'
// Create an Amazon S3 service client object.
const s3Client = new S3Client({ region, endpoint })
const deleteEverythingInBucket = async bucketName => {
  console.log('Deleting all objects in the bucket')
  const bucketParams = {
    Bucket: bucketName
  }
  try {
    const command = new ListObjectsCommand(bucketParams)
    const data = await s3Client.send(command)
    console.log('Bucket Data', JSON.stringify(data))
    if (data?.Contents?.length > 0) {
      console.log('Removing objects in the bucket', data.Contents.length)
      for (const object of data.Contents) {
        console.log('Removing object', object)
        if (object.Key) {
          try {
            await deleteFromS3({
              Bucket: bucketName,
              Key: object.Key
            })
          } catch (err) {
            console.log('Error on object delete', err)
          }
        }
      }
    }
  } catch (err) {
    console.log('Error listing objects', err)
  }
}
For my case, I wanted to be sure that all objects for specific prefixes would be deleted. So, we generate a list of all objects for each prefix, divide it by 1k records (AWS limitation), and delete them.
Please note that AWS CLI and jq must be installed and configured.
A text file with prefixes that we want to delete was created (in the example below prefixes.txt).
The format is:
prefix1
prefix2
And this is the shell script (replace BUCKET_NAME with your real bucket name):
#!/bin/sh
BUCKET="BUCKET_NAME"
PREFIXES_FILE="prefixes.txt"

if [ -f "$PREFIXES_FILE" ]; then
    while read -r current_prefix
    do
        printf '***** PREFIX %s *****\n' "$current_prefix"
        OLD_OBJECTS_FILE="$current_prefix-all.json"
        if [ -f "$OLD_OBJECTS_FILE" ]; then
            printf 'Deleted %s...\n' "$OLD_OBJECTS_FILE"
            rm "$OLD_OBJECTS_FILE"
        fi
        cmd="aws s3api list-object-versions --bucket \"$BUCKET\" --prefix \"$current_prefix/\" --query \"[Versions,DeleteMarkers][].{Key: Key, VersionId: VersionId}\" >> $OLD_OBJECTS_FILE"
        echo "$cmd"
        eval "$cmd"
        no_of_obj=$(cat "$OLD_OBJECTS_FILE" | jq 'length')
        i=0
        page=0
        # Get old version objects
        echo "Objects versions count: $no_of_obj"
        while [ $i -lt "$no_of_obj" ]
        do
            next=$((i+999))
            old_versions=$(cat "$OLD_OBJECTS_FILE" | jq '.[] | {Key,VersionId}' | jq -s '.' | jq .[$i:$next])
            paged_file_name="$current_prefix-page-$page.json"
            cat << EOF > "$paged_file_name"
{"Objects":$old_versions, "Quiet":true}
EOF
            echo "Deleting records from $i - $next"
            cmd="aws s3api delete-objects --bucket \"$BUCKET\" --delete file://$paged_file_name"
            echo "$cmd"
            eval "$cmd"
            i=$((i+1000))
            page=$((page+1))
        done
    done < "$PREFIXES_FILE"
else
    echo "$PREFIXES_FILE does not exist."
fi
If you just want to check the list of objects without deleting them immediately, comment out or remove the last eval "$cmd".
I needed to delete older object versions but keep the current version in the bucket. The code uses iterators and works on buckets of any size with any number of objects.
import boto3
from itertools import islice

bucket = boto3.resource('s3').Bucket('bucket_name')
all_versions = bucket.object_versions.all()
stale_versions = iter(filter(lambda x: not x.is_latest, all_versions))
pages = iter(lambda: tuple(islice(stale_versions, 1000)), ())
for page in pages:
    bucket.delete_objects(
        Delete={
            'Objects': [{
                'Key': item.key,
                'VersionId': item.version_id
            } for item in page]
        })
S3=s3://tmobi-processed/biz.db/
aws s3 rm ${S3} --recursive
BUCKET=`echo ${S3} | egrep -o 's3://[^/]*' | sed -e s/s3:\\\\/\\\\///g`
PREFIX=`echo ${S3} | sed -e s/s3:\\\\/\\\\/${BUCKET}\\\\///g`
aws s3api list-object-versions \
    --bucket ${BUCKET} \
    --prefix ${PREFIX} |
jq -r '.Versions[] | .Key + " " + .VersionId' |
while read key id ; do
    aws s3api delete-object \
        --bucket ${BUCKET} \
        --key ${key} \
        --version-id ${id} >> versions.txt
done
aws s3api list-object-versions \
    --bucket ${BUCKET} \
    --prefix ${PREFIX} |
jq -r '.DeleteMarkers[] | .Key + " " + .VersionId' |
while read key id ; do
    aws s3api delete-object \
        --bucket ${BUCKET} \
        --key ${key} \
        --version-id ${id} >> delete_markers.txt
done
You can use the AWS CLI to delete an S3 bucket:
aws s3 rb s3://your-bucket-name
If the AWS CLI is not installed on your computer, you can use the following commands.
For Linux or Ubuntu:
sudo apt-get install awscli
Then check whether it is installed:
aws --version
Now configure it by providing your AWS access credentials:
aws configure
Then enter your access key, secret access key, and region.