How to input shell parameters in AWS CLI - amazon-web-services

I've got two shell parameters:
AID="subnet-00000"
BID="subnet-11111"
And I can't execute the statement below:
aws rds create-db-subnet-group \
--db-subnet-group-name dbsubnet-$service_name \
--db-subnet-group-description "dbsubnet-$service_name" \
--subnet-ids '[$AID, $BID]'
The error message is saying that
Expecting value: line 1 column 2 (char 1)
How can I pass my parameters into the AWS CLI statement?

Since you've used single quotes, the variables won't be expanded. You can also skip the square brackets:
aws rds create-db-subnet-group \
--db-subnet-group-name dbsubnet-$service_name \
--db-subnet-group-description "dbsubnet-$service_name" \
--subnet-ids $AID $BID
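As a side note (this addition is not part of the original answer): double quotes keep variable expansion while protecting against word splitting, so it is safest to quote each variable individually. A minimal sketch, assuming the same values as above and a hypothetical service_name:
AID="subnet-00000"
BID="subnet-11111"
service_name="example"   # hypothetical value, used only for illustration
# Single quotes pass the text literally, which is why the CLI tried to parse the
# literal string '[$AID, $BID]' as JSON and reported "Expecting value".
# Double quotes expand the variables and keep each ID as a separate argument.
aws rds create-db-subnet-group \
  --db-subnet-group-name "dbsubnet-$service_name" \
  --db-subnet-group-description "dbsubnet-$service_name" \
  --subnet-ids "$AID" "$BID"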

Related

unable to create a gcloud alert policy on the command line with multiple conditions

I am trying to create a single alert policy for the Cloud SQL instance_state metric through gcloud, with multiple conditions.
If the instance is in the "RUNNABLE" OR "FAILED" state for more than 5 minutes, an alert should be triggered. I was able to create that in the console (screenshot not included here).
Now I try the same using the command line with this gcloud command:
gcloud alpha monitoring policies create \
--display-name='Test Database State Alert ('$PROJECTID')' \
--condition-display-name='Instance is not running for 5 minutes' \
--notification-channels="x23234dfdfffffff" \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "RUNNABLE")'
OR 'metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--combiner='OR' \
--documentation='The rule "${condition.display_name}" has generated this alert for the "${metric.display_name}".' \
--project="$PROJECTID" \
--enabled
I am getting the error below in the OR part of the condition:
ERROR: (gcloud.alpha.monitoring.policies.create) unrecognized arguments:
OR
metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")
Even if I put ( ) around the condition it still fails, and the || operator fails as well.
Can anyone please tell me the correct gcloud command for this? Also, I want the structure of the alert policy to be similar to the one created in the Cloud Console described above.
Thanks
I was able to use gcloud alpha monitoring policies conditions create to append additional conditions:
gcloud alpha monitoring policies create \
--notification-channels=projects/qwiklabs-gcp-04-d822dd6cd419/notificationChannels/2510735656842641871 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_MEAN"}' \
--condition-display-name='CPU Utilization >0.95 for 1m' \
--condition-filter='metric.type="compute.googleapis.com/instance/cpu/utilization" resource.type="gce_instance"' \
--duration='1m' \
--if='> 0.95' \
--display-name='alert on spikes or consistently high cpu' \
--combiner='OR'
gcloud alpha monitoring policies list --format='value(name,displayName)'
gcloud alpha monitoring policies conditions create \
projects/qwiklabs-gcp-04-d822dd6cd419/alertPolicies/1712202834227136574 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_MEAN"}' \
--condition-display-name='CPU Utilization >0.80 for 10m' \
--condition-filter='metric.type="compute.googleapis.com/instance/cpu/utilization" resource.type="gce_instance"' \
--duration='10m' \
--if='> 0.80'
Duplicate --condition-filter clauses did not work for me. YMMV.
From the docs for gcloud alpha monitoring policies create, it appears that you can specify repeated (!) occurrences of:
[--aggregation=AGGREGATION --condition-display-name=CONDITION_DISPLAY_NAME --condition-filter=CONDITION_FILTER --duration=DURATION --if=IF_VALUE --trigger-count=TRIGGER_COUNT | --trigger-percent=TRIGGER_PERCENT]
So I think you need to duplicate your --condition-filter (and its companion flags) and combine them with --combiner='OR', i.e.:
gcloud alpha monitoring policies create \
--display-name='Test Database State Alert ('$PROJECTID')' \
--notification-channels="x23234dfdfffffff" \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-display-name='RUNNABLE' \
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "RUNNABLE")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-display-name='FAILED' \
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--combiner='OR' \
--documentation='The rule "${condition.display_name}" has generated this alert for the "${metric.display_name}".' \
--project="$PROJECTID" \
--enabled

How to get the full results of a query to a CSV file using AWS Athena from the CLI?

I need to download the full content of a table that I have in my AWS Glue Catalog using AWS Athena. At the moment what I do is run a select * from my_table from the dashboard and save the result locally as a CSV, again from the dashboard. Is there a way to get the same result using the AWS CLI?
From the documentation I can see https://docs.aws.amazon.com/cli/latest/reference/athena/get-query-results.html but it is not quite what I need.
You can run an Athena query with AWS CLI using the aws athena start-query-execution API call. You will then need to poll with aws athena get-query-execution until the query is finished. When that is the case the result of that call will also contain the location of the query result on S3, which you can then download with aws s3 cp.
Here's an example script:
#!/usr/bin/env bash
region=us-east-1 # change this to the region you are using
query='SELECT NOW()' # change this to your query
output_location='s3://example/location' # change this to a writable location
query_execution_id=$(aws athena start-query-execution \
--region "$region" \
--query-string "$query" \
--result-configuration "OutputLocation=$output_location" \
--query QueryExecutionId \
--output text)
while true; do
  status=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.Status.State \
    --output text)
  # The query can also sit in QUEUED before it starts RUNNING; keep polling in both cases.
  if [[ $status != 'RUNNING' && $status != 'QUEUED' ]]; then
    break
  else
    sleep 5
  fi
done

if [[ $status = 'SUCCEEDED' ]]; then
  result_location=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.ResultConfiguration.OutputLocation \
    --output text)
  exec aws s3 cp "$result_location" -
else
  reason=$(aws athena get-query-execution \
    --region "$region" \
    --query-execution-id "$query_execution_id" \
    --query QueryExecution.Status.StateChangeReason \
    --output text)
  echo "Query $query_execution_id failed: $reason" 1>&2
  exit 1
fi
If your primary work group has an output location, or you want to use a different work group that also has a defined output location, you can modify the start-query-execution call accordingly. Otherwise, you probably have an S3 bucket called aws-athena-query-results-NNNNNNN-XX-XXXX-N that was created by Athena at some point and that is used for outputs when you use the UI.
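For example (a sketch, not from the original answer; "primary" is Athena's default work group name, substitute your own), you can drop --result-configuration and let the work group's configured output location apply by passing --work-group instead:
query_execution_id=$(aws athena start-query-execution \
  --region "$region" \
  --query-string "$query" \
  --work-group primary \
  --query QueryExecutionId \
  --output text)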
You cannot save results from the AWS CLI, but you can Specify a Query Result Location and Amazon Athena will automatically save a copy of the query results in an Amazon S3 location that you specify.
You could then use the AWS CLI to download that results file.
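For instance (illustrative only; the bucket name and query execution ID below are placeholders), Athena writes the results under the configured result location as <QueryExecutionId>.csv, which you can pull down with:
# Download the CSV that Athena wrote for a finished query.
aws s3 cp s3://example-athena-results/1b2c3d4e-5678-90ab-cdef-EXAMPLE11111.csv my_table.csv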

override parameters in a parameter file for CloudFormation

Let's say I have all the parameters needed to create a CloudFormation stack in a JSON file, but I want to override some parameters from the parameters file. Is this possible?
aws cloudformation create-stack \
--stack-name sample-stack \
--template-body file://sample-stack.yaml \
--parameters file://sample-stack.json \
--capabilities CAPABILITY_IAM \
--disable-rollback \
--region us-east-1 \
--output json && \
aws cloudformation wait stack-create-complete \
--stack-name sample-stack
So let's say there are about 10 parameters in the sample-stack.json file, BUT there are 2 parameters I want to override from that file.
Is this possible?
Thanks
This isn't available in the AWS CLI right now, but there is a feature request on GitHub. For now you'll need to script something to generate your overrides prior to creating the stack. Another potential option is to store your values in something that you can dynamically reference, such as Parameter Store, and update them via the API prior to stack creation.
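For illustration (not part of the original answer; the parameter name and value are hypothetical), the Parameter Store route could look like this, with the template reading the value through an SSM parameter type so the override happens outside the parameters file:
# Update the value you want to override before creating the stack.
aws ssm put-parameter \
  --name /sample-stack/InstanceType \
  --value t3.small \
  --type String \
  --overwrite
# In sample-stack.yaml the parameter can then resolve from Parameter Store, e.g.:
#   InstanceType:
#     Type: AWS::SSM::Parameter::Value<String>
#     Default: /sample-stack/InstanceType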
If you want to update a stack and specify only the list of parameters that changed, you can have a look at this shell script that I wrote.
Usage:
▶ bash update_stack.sh -h
Usage: update_stack.sh [-h] STACK_NAME KEY1=VAL1 [KEY2=VAL2 ...]
Updates CloudFormation stacks based on parameters passed here as key=value pairs. All
other parameters are based on existing values.
To solve your problem, you could borrow the edit() function:
PARAMS='sample-stack.json'
edit() {
  local key value pair
  # Each argument is expected as KEY=VALUE; update the matching ParameterKey in place.
  for pair in "$@" ; do
    IFS='=' read -r key value <<< "$pair"
    jq --arg key "$key" \
       --arg value "$value" \
       '(.[] | select(.ParameterKey==$key)
         | .ParameterValue) |= $value' \
       "$PARAMS" > x ; mv x "$PARAMS"
  done
}
cp $PARAMS $PARAMS.bak
edit param1=newval1 param2=newval2
And then create your stack as normal.
Make all the values in the file variables, and use another script to pass in the default values or overwrite them.
For example, my JSON file sample-stack.json looks like the following:
[
  {
    "ParameterKey": "InstanceType",
    "ParameterValue": "${instance_type}"
  },
  {
    "ParameterKey": "DesiredSize",
    "ParameterValue": "${ASG_DESIRED_Number}"
  }
]
In the script file, run the following commands to do the replacement:
export instance_type=t3.small   # envsubst only substitutes exported (environment) variables
envsubst < "${IN_FILENAME}" > "${OUT_FILENAME}"
What you need to do is replace only the variables you need to change; for those that don't need to change, the default values are passed in.
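Putting it together, a minimal sketch (the template file name, variable values, and stack name here are assumptions for illustration):
#!/usr/bin/env bash
# Export the values so envsubst can see them, then render the parameters file.
export instance_type="${instance_type:-t3.small}"
export ASG_DESIRED_Number="${ASG_DESIRED_Number:-2}"

envsubst < sample-stack.json.tpl > sample-stack.json

aws cloudformation create-stack \
  --stack-name sample-stack \
  --template-body file://sample-stack.yaml \
  --parameters file://sample-stack.json \
  --capabilities CAPABILITY_IAM \
  --region us-east-1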

How to add an index to DynamoDB from the command line after the table was created

Could you please point me to an appropriate documentation topic or provide an example of how to add an index to DynamoDB, as I couldn't find any related info.
According to this blog: http://aws.amazon.com/blogs/aws/amazon-dynamodb-update-online-indexing-reserved-capacity-improvements/?sc_ichannel=em&sc_icountry=global&sc_icampaigntype=launch&sc_icampaign=em_130867660&sc_idetail=em_1273527421&ref_=pe_411040_130867660_15 it seems to be possible to do it with the UI; however, there is no mention of CLI usage.
Thanks in advance,
Yevhenii
The aws command has help for every level of subcommand. For example, you can run aws help to get a list of all service names and discover the name dynamodb. Then you can aws dynamodb help to find the list of DDB commands and find that update-table is a likely culprit. Finally, aws dynamodb update-table help shows you the flags needed to add a global secondary index.
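In concrete terms, that discovery path (just the commands named above) is:
aws help                        # lists all service names, including dynamodb
aws dynamodb help               # lists DynamoDB subcommands, including update-table
aws dynamodb update-table help  # shows the flags needed to add a global secondary index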
The AWS CLI documentation is really poor and lacks examples. Evidently AWS is promoting the SDK or the console.
This should work for updating:
aws dynamodb update-table --table-name Test \
--attribute-definitions AttributeName=City,AttributeType=S AttributeName=State,AttributeType=S \
--global-secondary-index-updates \
'Create={IndexName=state-index,KeySchema=[{AttributeName=State,KeyType=HASH}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[City]},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}}'
Here's a shell function that sets the R/W capacities and optionally handles --global-secondary-index-updates if an index name is provided:
dynamodb_set_caps() {
  # Usage: dynamodb_set_caps TABLE_NAME READ_CAPACITY WRITE_CAPACITY [INDEX_NAME]
  # [ "$1" ] || fail_exit "Missing table name"
  # [ "$2" ] || fail_exit "Missing read capacity"
  # [ "$3" ] || fail_exit "Missing write capacity"
  if [ "$4" ] ; then
    aws dynamodb update-table --region $region --table-name ${1} \
      --provisioned-throughput ReadCapacityUnits=${2},WriteCapacityUnits=${3} \
      --global-secondary-index-updates \
      "Update={IndexName=${4},ProvisionedThroughput={ReadCapacityUnits=${2},WriteCapacityUnits=${3}}}"
  else
    aws dynamodb update-table --region $region --table-name ${1} \
      --provisioned-throughput ReadCapacityUnits=${2},WriteCapacityUnits=${3}
  fi
}
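Example invocation (the table and index names below are only for illustration, and $region must be set beforehand):
region=us-east-1
dynamodb_set_caps Test 5 5 state-index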
Completely agree that the aws docs are lacking in this area.
Here is a reference for creating a global secondary index:
https://docs.aws.amazon.com/pt_br/amazondynamodb/latest/developerguide/getting-started-step-6.html
However, the example only covers creating an index on a simple (single-attribute) primary key.
This code helped me to create a global secondary index for a composite primary key:
aws dynamodb update-table \
--table-name YourTableName \
--attribute-definitions AttributeName=GSI1PK,AttributeType=S \
AttributeName=GSI1SK,AttributeType=S \
AttributeName=createdAt,AttributeType=S \
--global-secondary-index-updates \
"[{\"Create\":{\"IndexName\": \"GSI1\",\"KeySchema\":[{\"AttributeName\":\"GSI1PK\",\"KeyType\":\"HASH\"},{\"AttributeName\":\"GSI1SK\",\"KeyType\":\"RANGE\"}], \
\"ProvisionedThroughput\": {\"ReadCapacityUnits\": 5, \"WriteCapacityUnits\": 5 },\"Projection\":{\"ProjectionType\":\"ALL\"}}}]" --endpoint-url http://localhost:8000
Note that the --endpoint-url http://localhost:8000 option on the last line assumes you are creating this index in a local DynamoDB instance. If not, just delete that option.

How to specify the volume size of the root instance storage?

An AWS CLI command for requesting a spot instance that I currently use looks like this:
aws ec2 request-spot-instances \
--region eu-west-1 \
--spot-price 0.1 \
--launch-specification "{ \
\"KeyName\": \"aws\", \
\"ImageId\": \"$IMGID_DIGITS\", \
\"InstanceType\": \"g2.2xlarge\" , \
\"SecurityGroupIds\": [\"$SGID\"] \
}"
How / Where do I specify the size of the root instance storage to be 16 GB instead of the usual 8 GB?
I would look at BlockDeviceMappings.Ebs.VolumeSize.
Reference: http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html
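A sketch of how that might look in the launch specification (this is an illustration, not the documented answer; the root device name /dev/xvda depends on the AMI, and the other values are carried over from the question):
aws ec2 request-spot-instances \
--region eu-west-1 \
--spot-price 0.1 \
--launch-specification "{ \
\"KeyName\": \"aws\", \
\"ImageId\": \"$IMGID_DIGITS\", \
\"InstanceType\": \"g2.2xlarge\", \
\"SecurityGroupIds\": [\"$SGID\"], \
\"BlockDeviceMappings\": [{ \
\"DeviceName\": \"/dev/xvda\", \
\"Ebs\": {\"VolumeSize\": 16, \"DeleteOnTermination\": true} \
}] \
}"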