aws: error: argument operation: put-bucket-lifecycle-configuration - amazon-web-services

Getting the following error for put-bucket-lifecycle-configuration:
[root@ADM-PROD-OMNI noc-scripts]# aws s3api put-bucket-lifecycle-configuration --bucket noc-try --lifecycle-configuration lifecycle.json
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument operation: Invalid choice, valid choices are:
abort-multipart-upload | complete-multipart-upload
copy-object | create-bucket
create-multipart-upload | delete-bucket
delete-bucket-cors | delete-bucket-lifecycle
delete-bucket-policy | delete-bucket-replication
delete-bucket-tagging | delete-bucket-website
delete-object | delete-objects
get-bucket-acl | get-bucket-cors
get-bucket-lifecycle | get-bucket-location
get-bucket-logging | get-bucket-notification
get-bucket-notification-configuration | get-bucket-policy
get-bucket-replication | get-bucket-request-payment
get-bucket-tagging | get-bucket-versioning
get-bucket-website | get-object
get-object-acl | get-object-torrent
head-bucket | head-object
list-buckets | list-multipart-uploads
list-object-versions | list-objects
list-parts | put-bucket-acl
put-bucket-cors | put-bucket-lifecycle
put-bucket-logging | put-bucket-notification
put-bucket-notification-configuration | put-bucket-policy
put-bucket-replication | put-bucket-request-payment
put-bucket-tagging | put-bucket-versioning
put-bucket-website | put-object
put-object-acl | restore-object
upload-part | upload-part-copy
wait | help
But get-bucket-lifecycle works, which means my AWS CLI is configured:
[root@ADM-PROD-OMNI noc-scripts]# aws s3api get-bucket-lifecycle --bucket 4sm-wrapup
RULES clear multipart failed files Enabled
OR
[root@ADM-PROD-OMNI noc-scripts]# aws s3api get-bucket-lifecycle --bucket noc-try
A client error (NoSuchLifecycleConfiguration) occurred when calling the GetBucketLifecycle operation: The lifecycle configuration does not exist
I also tried:
[root@ADM-PROD-OMNI noc-scripts]# aws s3api put-bucket-lifecycle --bucket noc-try --lifecycle-configuration lifecycle.json
Error parsing parameter '--lifecycle-configuration': Expected: '=', received: '.' for input:
lifecycle.json
^
Please let me know what is wrong here?

In your first example it is pretty obvious that the subcommand "put-bucket-lifecycle-configuration" does not exist in your CLI version, and you need to use "put-bucket-lifecycle" instead, which you said you also tried and received a different error.
Different errors are good!
The new error suggests improper syntax when referencing your .json configuration file, and/or improperly structured JSON.
See the AWS CLI reference for put-bucket-lifecycle.
Here is an example of passing a .json config file:
aws s3api put-bucket-lifecycle --bucket my-bucket --lifecycle-configuration file://lifecycle.json
Here is an example of a JSON file:
{
  "Rules": [
    {
      "ID": "Move to Glacier after sixty days (objects in logs/2015/)",
      "Prefix": "logs/2015/",
      "Status": "Enabled",
      "Transition": {
        "Days": 60,
        "StorageClass": "GLACIER"
      }
    },
    {
      "Expiration": {
        "Date": "2016-01-01T00:00:00.000Z"
      },
      "ID": "Delete 2014 logs in 2016.",
      "Prefix": "logs/2014/",
      "Status": "Enabled"
    }
  ]
}
The JSON file below is tested and works:
{
  "Rules": [
    {
      "ID": "multipart-upload-rule",
      "Prefix": "noc-try",
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 3 }
    }
  ]
}
CLI command to create the lifecycle configuration using the above JSON file:
aws s3api put-bucket-lifecycle --bucket testbucket1478921 --lifecycle-configuration file://c:/tmp/test.json
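You can then confirm the rule was applied with get-bucket-lifecycle (the same call used earlier in this thread), for example:
aws s3api get-bucket-lifecycle --bucket testbucket1478921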

Related

How to filter by value only, regardless of the name, in the AWS CLI

My EC2 instance has many tags with the desired value EBM. The catch is that this value can sit under different tag names, sometimes under tag:Name and sometimes under tag:XXX. I tried the query below and it didn't work:
aws --region sa-east-1 ec2 describe-security-groups --filters 'Name=*,Values=*EBM*'
An error occurred (InvalidParameterValue) when calling the DescribeSecurityGroups operation: The filter '*' is invalid
Any idea how to treat the tag name as a wildcard and just match on the value?
I also tried this and it didn't work:
aws --region sa-east-1 ec2 describe-security-groups --filters 'Name=*.*,Values=*EBM*'
Latest Edit:
I have tested this in my lab environment, and it is possible to get what you are looking for by combining --filters and --query together.
Alternatively, you can use the contains function on Tags[] in the query to match tag values that contain a given string (EBM here) anywhere in the value. Keep in mind that the --query option uses the JMESPath query language, which has its own syntax and rules. You can find more information on JMESPath queries in the AWS CLI documentation.
Further, to get the tag values for the security groups as Key/Value pairs (i.e., a dictionary or associative array), you can combine that with --query to get a more readable format. Below is how you can do that:
$ aws ec2 describe-security-groups --filters Name=tag-key,Values="*" --query 'SecurityGroups[*].Tags[?contains(Value, `EBM`)][].{Key: Key, Value: Value}' --profile dev
[
  {
    "Key": "mylabtest",
    "Value": "EBMSecGrpec2"
  },
  {
    "Key": "mylabtest",
    "Value": "EBMSecGrpfsxontap"
  },
  {
    "Key": "mylabtest",
    "Value": "EBMSecGrpfsxlustre"
  },
  {
    "Key": "mylabtest",
    "Value": "EBMSecGrpPostConfigAWSCodeBuild"
  }
]
You can get it in tabular form as well:
aws ec2 describe-security-groups --filters Name=tag-key,Values="*" --query 'SecurityGroups[*].Tags[?contains(Value, `EBM`)][].{Key: Key, Value: Value}' --profile dev --output table
Use describe-tags
To --filter by a value only, regardless of the name, in the AWS CLI, you can simply use "Name=value,Values=*tg*".
Keep Name=value so that only the value fields are matched.
Use Values=*EBM* and it will fetch all tags whose value contains EBM, regardless of prefix or suffix.
You can combine --filters with the --query option to filter the output and display only specific fields. For example, to display only the tag names and values, you can use the following command:
$ aws ec2 describe-tags --filters "Name=value,Values=*tg*" --query 'Tags[*].{Key_Name: Key, ValueOfKey: Value}' --profile dev
[
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  },
  {
    "Key_Name": "SSM_Managed",
    "ValueOfKey": "Stg"
  }
]
OR
You can get it as a table to be more readable:
$ aws ec2 describe-tags --filters "Name=value,Values=*tg*" --query 'Tags[*].{Key_Name: Key, ValueOfKey: Value}' --profile dev --output table
-------------------------------
|        DescribeTags         |
+--------------+--------------+
|   Key_Name   |  ValueOfKey  |
+--------------+--------------+
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
|  SSM_Managed |  Stg         |
+--------------+--------------+
To make it more explicit, you can start with the following simpler command. It will return all tags with a value of "VALUE", regardless of the tag name:
aws ec2 describe-tags --filters "Name=value,Values=VALUE"
If you want to filter the results further, you can include additional filters by adding more "Name=...,Values=..." entries to the --filters flag, separated by spaces. For example, to only return tags with a value of "VALUE" that are associated with security groups, you can use the following command:
aws ec2 describe-tags --filters "Name=value,Values=VALUE" "Name=resource-type,Values=security-group"
EDIT:
Using describe-security-groups, filter on the tag value *SecGrpec2* and then get the names of the security groups those values belong to:
$ aws ec2 describe-security-groups --filters "Name=tag-value,Values=*SecGrpec2*" --profile dev | jq -r '.SecurityGroups[].GroupName'
EC2 - SC101
EC2 - SD102
EC2 - ST101
It's not possible. You have to get all the results first, then do the filtering yourself.
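For example, one way to do that client-side filtering is with jq (a sketch, assuming you want the names of groups that have any tag value containing EBM):
aws ec2 describe-security-groups --region sa-east-1 | jq -r '.SecurityGroups[] | select(any(.Tags[]?; .Value | contains("EBM"))) | .GroupName'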

AWS CLI - SSL check on every bucket

I'm just getting started with learning the AWS CLI. Is there a way of checking pre-existing buckets and seeing if they have SSL enabled?
Many Thanks
buckets=$(aws s3api list-buckets | jq -r '.Buckets[].Name')
for bucket in $buckets
do
  #echo "$bucket"
  # Only look at buckets that actually have a bucket policy attached
  if aws s3api get-bucket-policy --bucket "$bucket" --query Policy --output text &> /dev/null; then
    # Count policy statements that key off aws:SecureTransport == "false" (i.e. enforce SSL)
    aws s3api get-bucket-policy --bucket "$bucket" --query Policy --output text \
      | jq -r 'select(.Statement[].Condition.Bool."aws:SecureTransport" == "false")' \
      | wc -l
  fi
done
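For reference, a typical bucket-policy statement that enforces SSL (the condition the script above selects on) looks roughly like this; my-bucket is a placeholder:
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
  "Condition": { "Bool": { "aws:SecureTransport": "false" } }
}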

Get Value of Name Tag from EC2 instances using jq

I am trying to get the values of the Name tag of AWS EC2 instances using jq.
aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | .Tags[] | select (.Key == "Name")'
But I am getting this error:
jq: error: Name/0 is not defined at <top-level>, line 1:
.Reservations[].Instances[] | .Tags[] | select (.Key == Name)
jq: 1 compile error
This is the json I'm trying to process:
{
  "Reservations": [{
    "Instances": [{
      "AmiLaunchIndex": 0,
      "ImageId": "ami-00c3c949f325a4149",
      "InstanceId": "i-0c17052ee1c7113e5",
      "Architecture": "x86_64",
      "Tags": [{
          "Key": "Name",
          "Value": "bastion001"
        },
        {
          "Key": "environment",
          "Value": "stg-us-east"
        }
      ]
    }]
  }]
}
How can I get the value of the Name tag from EC2 instances?
You don't need extra tools like jq to query the output. The AWS CLI has JMESPath built in to help you do that.
aws ec2 describe-instances --query 'Reservations[*].Instances[*].Tags[?Key == `Name`].Value'
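If you only want the tag values as plain text rather than nested JSON, the same query works with --output text:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].Tags[?Key == `Name`].Value' --output text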
You could do something like this; it will get what you want from EC2 as a comma-separated list:
aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | ((.Tags // empty) | from_entries) as $tags | [$tags.Name, $tags.environment, .ImageId, .InstanceId, .AmiLaunchIndex] | @csv'
The .Tags // empty part will skip instances that have no tags, if you are wondering.
Hope this helps. It is correct that you do not need jq to get these details, but this is how you can do it with jq. :)
There is a mismatch between the jq command-line expression shown in the question and the error message. I would suggest that, at least until you have sorted things out, you put your jq program in a file and invoke jq with the -f command-line option.
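For example (name-tag.jq is a hypothetical file name holding the question's filter, extended with .Value to print the tag value):
# name-tag.jq
.Reservations[].Instances[] | .Tags[] | select(.Key == "Name") | .Value

aws ec2 describe-instances | jq -r -f name-tag.jq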

AWS s3 bucket bulk encryption on all s3 buckets

I am trying to bulk-update all S3 buckets with default encryption. To do that I generate a JSON file using the command below:
aws s3api list-buckets --query "Buckets[].Name" >> s3.json
The result was the names of all my S3 buckets.
How do I pass that JSON file into the command so I can enable default encryption?
I also tried the following:
aws s3api list-buckets --query 'Buckets[*].[Name]' --output text | xargs -I {} bash -c 'aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}''
But I am getting the error below:
Error parsing parameter '--server-side-encryption-configuration': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {Rule
aws s3api put-bucket-encryption --bucket bucketnames --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
I have also tried the following, but it does not work:
aws s3api put-bucket-encryption \
--bucket value \
--server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}' \
--cli-input-json file://s3bucket.json
Please let me know how to update my command to enable default encryption.
Below is the code snippet to solve your problem:
#!/bin/bash
# Check if each bucket is SSE enabled and then encrypt using SSE AES256.
# List all buckets and store the names in an array.
arr=($(aws s3api list-buckets --query "Buckets[].Name" --output text))
# Check the status before encryption:
for i in "${arr[@]}"
do
  echo "Check if SSE is enabled for bucket -> ${i}"
  aws s3api get-bucket-encryption --bucket "${i}" | jq -r '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm'
done
# Encrypt all buckets in your account:
for i in "${arr[@]}"
do
  echo "Encrypting bucket with SSE AES256 for -> ${i}"
  aws s3api put-bucket-encryption --bucket "${i}" --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
done
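Note that get-bucket-encryption may return an error for buckets that have no default-encryption configuration yet. A variant of the status-check loop that prints "none" instead of the error (a sketch, using the same array as above):
for i in "${arr[@]}"
do
  # Suppress the error for buckets without a default-encryption config
  algo=$(aws s3api get-bucket-encryption --bucket "${i}" 2>/dev/null \
    | jq -r '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm')
  echo "${i}: ${algo:-none}"
done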
aws s3api list-buckets --query "Buckets[].Name" \
| jq .[] \
| xargs -I '{}' aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
Worked for me
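A small variant, if you prefer to strip the JSON quotes in jq itself rather than relying on xargs to do it (an equivalent sketch):
aws s3api list-buckets --query "Buckets[].Name" \
  | jq -r '.[]' \
  | xargs -I '{}' aws s3api put-bucket-encryption --bucket {} --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'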
If you wanted to do it in Python it would be something like this (not tested!):
import boto3

s3_client = boto3.client('s3')
response = s3_client.list_buckets()

for bucket in response['Buckets']:
    # Each entry is a dict; the bucket name is under 'Name'
    s3_client.put_bucket_encryption(
        Bucket=bucket['Name'],
        ServerSideEncryptionConfiguration={
            'Rules': [
                {
                    'ApplyServerSideEncryptionByDefault': {
                        'SSEAlgorithm': 'AES256'
                    }
                },
            ]
        }
    )

Multipart Upload in AWS failed

I am trying to upload an 18GB .vhd file to S3.
I have done the following process:
aws s3api create-multipart-upload --bucket amcaebucket --key 'multipart/01'
where amcaebucket is the name of my bucket.
I got the following response:
{
  "Bucket": "amcaebucket",
  "UploadId": "xxxxxxxxxxx",
  "Key": "multipart/01"
}
My next command:
aws s3api upload-part --bucket amcaebucket --key 'multipart/01' --part-number 1 --upload-id "xxxxxxxxxxxx"
I got the following error:
An error occurred (AllAccessDisabled) when calling the UploadPart operation: All access to this object has been disabled.
What do I do?
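For context, a complete multipart upload normally also passes the part data with --body and is finished with complete-multipart-upload. A rough sketch using the question's bucket, key, and upload-id placeholders (the part file and parts.json are hypothetical names):
aws s3api upload-part --bucket amcaebucket --key 'multipart/01' --part-number 1 --upload-id "xxxxxxxxxxxx" --body part01.vhd
aws s3api complete-multipart-upload --bucket amcaebucket --key 'multipart/01' --upload-id "xxxxxxxxxxxx" --multipart-upload file://parts.json
where parts.json lists each part's PartNumber and the ETag returned by upload-part, for example:
{ "Parts": [ { "PartNumber": 1, "ETag": "\"exampleetag\"" } ] }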