I'm trying to test creating a Firehose delivery stream to S3 with a custom prefix, using LocalStack:
https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
However, it seems that the format string I send as the Prefix parameter is not evaluated; it is taken as-is and used verbatim as the prefix.
This is the AWS CLI command I'm using to create the delivery stream:
aws --endpoint-url $EP_URL firehose create-delivery-stream \
--delivery-stream-name $DELIVERY_STREAM_NAME \
--region $TEST_REGION \
--s3-destination-configuration "RoleARN=arn:aws:iam::123456789012:role/Test-Role,BucketARN=$DEST_BUCKET, \
BufferingHints={SizeInMBs=1,IntervalInSeconds=60},\
Prefix=\"myprefix/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/\""
and this is the resulting object in S3 after writing to the stream:
aws --endpoint-url $EP_URL s3 ls --recursive $DEST_BUCKET | tail -n 20
2022-11-23 17:11:10 5 myprefix/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/2022/11/23/15/mystream-2022-11-23-15-11-10-dd405ee0-0f74-4a16-9e24-df936935b782
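For comparison, if the prefix were evaluated, I'd expect the !{timestamp:...} directives to expand, giving a key like this (reconstructed from the timestamp above, not actual output):
myprefix/year=2022/month=11/day=23/hour=15/mystream-2022-11-23-15-11-10-dd405ee0-0f74-4a16-9e24-df936935b782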
Any ideas anyone? Thank you in advance!
Is there an easy way using the AWS CLI to delete all size 0 objects under a prefix?
For example, if our S3 prefix looks like this:
$ aws s3 ls --recursive s3://bucket/prefix
2022-04-20 10:39:51 0 empty_file
2022-04-20 10:39:52 21 top_level_file
2022-04-14 15:01:34 0 folder_a/nested_empty_file
2022-04-23 03:35:02 42 folder_a/dont_delete_me
I would like an aws cli command line invocation to just delete empty_file and folder_a/nested_empty_file.
I know this could be done via boto or any number of other S3 API implementations, but it feels like I should be able to do this as a one-liner from the command line, given how simple it is.
Using the aws s3api subcommand and jq for JSON wrangling, we can do the following:
aws s3api delete-objects --bucket bucket --delete "$(aws s3api list-objects-v2 --bucket bucket --prefix prefix --query 'Contents[?Size==`0`]' | jq -c -r '{ "Objects": [.[] | {"Key": .Key}] }')"
aws s3api does not yet support reading from stdin (see the GitHub PR here: https://github.com/aws/aws-cli/pull/3209), so we need to pass the list of objects via sub-shell expansion. That's unfortunately a little awkward, but it still meets my requirement of being a one-liner.
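If the sub-shell expansion gets unwieldy, a two-step variant works too: write the delete manifest to a file and reference it with file://. A minimal sketch, assuming the same placeholder bucket/prefix names, a hypothetical /tmp/empty-objects.json path, and at least one zero-byte object under the prefix (otherwise the query returns null); note that delete-objects accepts at most 1000 keys per call:
# Build the manifest of zero-byte keys, then delete them in a second call
aws s3api list-objects-v2 --bucket bucket --prefix prefix \
    --query 'Contents[?Size==`0`]' \
    | jq -c '{ "Objects": [.[] | {"Key": .Key}] }' > /tmp/empty-objects.json
aws s3api delete-objects --bucket bucket --delete file:///tmp/empty-objects.json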
I'm trying to empty an S3 bucket using the CLI.
I tried the aws s3 rm --recursive command, which doesn't empty my bucket because it has versioning enabled.
I tried the aws s3 rb --force command to forcefully delete the bucket, which doesn't work either. It throws this error: BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
I really need to get this done using the CLI. Is there a way to do it? Please help. The end goal is to delete the bucket. Thanks in advance.
If you can only use the CLI, try this:
aws s3api delete-objects \
--bucket ${bucket_name} \
--delete "$(aws s3api list-object-versions \
--bucket "${bucket_name}" \
--output=json \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
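Note that this removes object versions but not delete markers, and a bucket that still holds delete markers can't be removed either. A sketch of the companion call, the same pattern with DeleteMarkers in place of Versions (and, as an aside, delete-objects handles at most 1000 keys per call):
aws s3api delete-objects \
    --bucket ${bucket_name} \
    --delete "$(aws s3api list-object-versions \
        --bucket "${bucket_name}" \
        --output=json \
        --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
Once both calls succeed, aws s3 rb s3://${bucket_name} should go through.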
I am using an AWS CLI command on my Windows machine to get the latest file from an S3 bucket:
aws s3 ls s3://Bucket-name --recursive | sort | tail -n 1
It lists all the files sorted by date up to this point:
aws s3 ls s3://Bucket-name --recursive | sort
But the full command throws an error:
'tail' is not recognized as an internal or external command
Is there some other alternative for tail, or for the full command?
The AWS CLI permits JMESPath expressions in the --query parameter.
This command shows the most recently-updated object:
aws s3api list-objects --bucket my-bucket --query 'sort_by(Contents, &LastModified)[-1].Key' --output text
It's basically saying:
Sort by LastModified
Obtain the last [-1] entry
Show the Key (filename)
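As a usage sketch, you can capture that key and download the object in one go (my-bucket is a placeholder; this assumes a POSIX shell, so on Windows run it under Git Bash or WSL, or adapt the variable syntax):
# Grab the key of the most recently modified object, then copy it locally
key=$(aws s3api list-objects --bucket my-bucket \
    --query 'sort_by(Contents, &LastModified)[-1].Key' --output text)
aws s3 cp "s3://my-bucket/${key}" .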
I am trying to set a redrive policy for SQS using the AWS CLI command below, but I'm seeing an error related to the redrive JSON. Can you please let me know how I can fix this?
redrive_policy="{\"RedrivePolicy\":{\"deadLetterTargetArn\":\"$dlq_arn\",\"maxReceiveCount\":\"15\"}}"
AWS CLI COMMAND
aws sqs set-queue-attributes --queue-url https://queue.amazonaws.com/12345678/test-queue --attributes $redrive_policy --region=us-east-1
Error Message
Parameter validation failed: Invalid type for parameter
Attributes.RedrivePolicy, value: OrderedDict([(u'deadLetterTargetArn',
u'arn:aws:sqs:us-east-1:12345678:dlq'), (u'maxReceiveCount', u'15')]),
type: <type 'collections.OrderedDict'>, valid types: <type 'basestring'>
Have you tried just creating the JSON in a separate file and passing it as an argument to your AWS CLI command? I find it's difficult to get all of the escaping correct when passing the JSON as a parameter. So you'd basically do it as the example shows in the AWS documentation:
https://docs.aws.amazon.com/cli/latest/reference/sqs/set-queue-attributes.html#examples
So first you'd create a new file called "set-queue-attributes.json" like so:
{
"DelaySeconds": "10",
"MaximumMessageSize": "131072",
"MessageRetentionPeriod": "259200",
"ReceiveMessageWaitTimeSeconds": "20",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"VisibilityTimeout": "60"
}
Then run the command like this:
aws sqs set-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attributes file://set-queue-attributes.json --region=us-east-1
If you want to run it as a single command instead, you can use this example:
aws sqs set-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue \
--attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"MessageRetentionPeriod": "259200",
"VisibilityTimeout": "90"
}'
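Either way, you can verify that the attributes took effect by reading them back (a quick check using the same example queue URL):
aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue \
    --attribute-names RedrivePolicy MessageRetentionPeriod VisibilityTimeout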
Three Methods to achieve this:
Note: The solutions also work on any other AWS CLI commands that require a stringified JSON
1. Using the Command-line JSON processor jq (Recommended)
This method is recommended for several reasons:
I've found jq a handy tool when working with the AWS CLI, as the need to stringify JSON comes up quite frequently.
Install for Ubuntu: sudo apt install jq
Basic Options:
jq -R: reads raw input instead of JSON and emits it as a quoted (stringified) JSON string
jq -c: compact output; eliminates spacing and newline characters
The benefit is that you can write JSON as JSON and pipe the result through jq -R.
Method 1:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes RedrivePolicy=$(echo '{"maxReceiveCount":500,"deadLetterTargetArn":"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue"}' | jq -R '.')
OR if you have a sqs-redrive-policy.json file:
Method 2:
In sqs-redrive-policy.json,
{
"maxReceiveCount": 500,
"deadLetterTargetArn": "arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue"
}
Run in Command Line:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes RedrivePolicy=$(cat ~/path/to/file/sqs-redrive-policy.json | jq -c '.' | jq -R '.')
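For illustration, this is what that pipeline emits given the sqs-redrive-policy.json above: a single JSON string with the inner quotes escaped, which is exactly the form the RedrivePolicy attribute expects:
$ cat sqs-redrive-policy.json | jq -c '.' | jq -R '.'
"{\"maxReceiveCount\":500,\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue\"}"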
As you can see, the second benefit is that you can modify the RedrivePolicy attribute in isolation, without having to touch any of the other attributes.
Common confusion: the name set-queue-attributes is misleading (it would be better named put-queue-attributes), as it doesn't overwrite all attributes, only the ones mentioned in the command. So if you already set a Policy attribute earlier during create-queue, this will not overwrite the Policy to null. In other words, this is safe to use.
2. Using a stringified JSON
This is a pain to be honest, and I avoid this.
aws sqs set-queue-attributes \
--queue-url "https://sqs.us-east-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue\",\"maxReceiveCount\":\"500\"}"
}'
3. Using a file path URL to a JSON file of all attributes (attributes.json, NOT sqs-redrive-policy.json)
This is my last preference.
Reasons:
It means setting all the attributes specified in the attributes.json file again in a single go, not just the redrive policy.
It doesn't escape the pain of writing stringified JSON as text.
In attributes.json,
{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue\", \"maxReceiveCount\":\"5\"}"
}
Run in command line:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes file:///home/yourusername/path/to/file/attributes.json
I've got several objects stored in Amazon S3 whose content-type I need to change from text/html to application/rss+xml. I gather that it should be possible to do this with a copy command, specifying the same path for the source and destination. I'm trying to do this using the AWS cli tools, but I'm getting this error:
$ aws s3 cp s3://mybucket/feed/ogg/index.html \
s3://mybucket/feed/ogg/index.html \
--content-type 'application/rss+xml'
copy failed: s3://mybucket/feed/ogg/index.html
to s3://mybucket/feed/ogg/index.html
A client error (InvalidRequest) occurred when calling the
CopyObject operation: This copy request is illegal because it is
trying to copy an object to itself without changing the object's
metadata, storage class, website redirect location or encryption
attributes.
If I specify a different path for source and destination, I don't get the error:
$ aws s3 cp s3://mybucket/feed/ogg/index.html \
s3://mybucket/feed/ogg/index2.html \
--content-type 'application/rss+xml'
copy: s3://mybucket/feed/ogg/index.html
to s3://mybucket/feed/ogg/index2.html
Even though the command completes successfully, the index2.html object is created with the text/html content type, not the application/rss+xml type that I specified.
How can I modify this command-line to make it work?
It's possible to use the low-level s3api to make this change:
$ aws s3api copy-object --bucket archive --content-type "application/rss+xml" \
--copy-source archive/test/test.html --key test/test.html \
--metadata-directive "REPLACE"
http://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html
The problem was just not being able to specify the --metadata-directive. Thanks for pointing out the open issue / feature request, nelstrom!
You can also do it with the higher level API, by copying a file over itself but marking it as a change in metadata:
aws s3 cp \
--content-type "application/rss+xml" \
--metadata-directive REPLACE \
s3://mybucket/myfile \
s3://mybucket/myfile
You can also override the content type of your file with the aws s3 cp command: the optional --metadata-directive attribute, set to REPLACE, specifies that the metadata is replaced during the copy with the values you supply, including --content-type 'application/rss+xml':
aws s3 cp \
--content-type 'application/rss+xml' \
--metadata-directive REPLACE \
s3://mybucket/feed/ogg/index.html \
s3://mybucket/feed/ogg/index.html
More information: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
Then, you can verify it by checking the file metadata:
aws s3api head-object \
--bucket mybucket \
--key feed/ogg/index.html
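The response should now report the new content type; illustrative output (other fields such as ContentLength and ETag elided):
{
    ...
    "ContentType": "application/rss+xml",
    ...
}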