Let's say I have all the parameters needed to create a CloudFormation stack in a JSON file, but I want to override some parameters from the parameters file. Is this possible?
aws cloudformation create-stack \
--stack-name sample-stack \
--template-body file://sample-stack.yaml \
--parameters file://sample-stack.json \
--capabilities CAPABILITY_IAM \
--disable-rollback \
--region us-east-1 \
--output json && \
aws cloudformation wait stack-create-complete \
--stack-name sample-stack
So let's say there are 10 parameters in the sample-stack.json file, BUT there are 2 parameters I want to override from that file.
Is this possible?
Thanks
This isn't available in the AWS CLI right now, but there is a feature request on GitHub. For now you'll need to script something to generate your overrides prior to creating the stack. Another potential option is to store your values in something that you can dynamically reference, such as Parameter Store, and update them via the API prior to stack creation.
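For the Parameter Store route, a minimal sketch might look like this; the parameter name /sample/instance-type and the template snippet are hypothetical. CloudFormation resolves the SSM value at stack creation time when the template declares a parameter of type AWS::SSM::Parameter::Value&lt;String&gt;:
# overwrite the dynamic value right before creating the stack
aws ssm put-parameter \
    --name /sample/instance-type \
    --value t3.small \
    --type String \
    --overwrite
# in sample-stack.yaml:
#   Parameters:
#     InstanceTypeParam:
#       Type: AWS::SSM::Parameter::Value<String>
#       Default: /sample/instance-type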
If you want to update a stack and specify only the list of parameters that changed, you can have a look at this shell script that I wrote.
Usage:
▶ bash update_stack.sh -h
Usage: update_stack.sh [-h] STACK_NAME KEY1=VAL1 [KEY2=VAL2 ...]
Updates CloudFormation stacks based on parameters passed here as key=value pairs. All
other parameters are based on existing values.
To solve your problem, you could borrow the edit() function:
PARAMS='sample-stack.json'
edit() {
    local key value pair
    # iterate over the key=value pairs passed as arguments
    for pair in "$@" ; do
        IFS='=' read -r key value <<< "$pair"
        # rewrite the ParameterValue for the matching ParameterKey in place
        jq --arg key "$key" \
           --arg value "$value" \
           '(.[] | select(.ParameterKey==$key)
                 | .ParameterValue) |= $value' \
           "$PARAMS" > x ; mv x "$PARAMS"
    done
}
cp "$PARAMS" "$PARAMS.bak"
edit param1=newval1 param2=newval2
And then create your stack as normal.
Make all the values in the file variables, and use another script to pass in the default values or overwrite them.
For example, I have my JSON file sample-stack.json like the following:
[
{
"ParameterKey": "InstanceType",
"ParameterValue": "${instance_type}"
},
{
"ParameterKey": "DesiredSize",
"ParameterValue": "${ASG_DESIRED_Number}"
}
]
In the script file, run the following commands to do the replacement. Note that envsubst only substitutes environment variables, so they must be exported:
export instance_type=t3.small
envsubst < "${IN_FILENAME}" > "${OUT_FILENAME}"
All you need to do is export the variables you want to change; for those you don't, the default values set in the script are passed in. Beware that variables set nowhere are replaced with empty strings, unless you restrict substitution to a specific list, e.g. envsubst '$instance_type'.
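A fuller sketch of such a wrapper script, assuming the template is saved as sample-stack.json.tpl and the script carries the defaults (both file names are hypothetical):
#!/bin/bash
# defaults; exported so envsubst can see them
export instance_type=t3.micro
export ASG_DESIRED_Number=2
# overrides from the command line, e.g.: ./render.sh instance_type=t3.small
for pair in "$@"; do
    export "$pair"
done
# render the final parameters file for create-stack
envsubst < sample-stack.json.tpl > sample-stack.json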
I would like to update a CloudFront distribution with the latest Lambda@Edge function using the CLI.
I saw this documentation https://docs.aws.amazon.com/cli/latest/reference/cloudfront/update-distribution.html
but could not figure out how to update the Lambda ARN only.
Can someone help?
Here is a script that does exactly that. It is implemented based on @cloudbud's answer. There is no argument checking. It would be executed like this: ./script QF234ASD342FG my-lambda-at-edge-function us-east-1. In my case, the execution time is less than 10 seconds. See update-distribution for details.
#!/bin/bash
set -xeuo pipefail
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
distribution_id="$1"
function_name="$2"
region="$3"
readonly lambda_arn=$(
aws lambda list-versions-by-function \
--function-name "$function_name" \
--region "$region" \
--query "max_by(Versions, &to_number(to_number(Version) || '0'))" \
| jq -r '.FunctionArn'
)
readonly tmp1=$(mktemp)
readonly tmp2=$(mktemp)
aws cloudfront get-distribution-config \
--id "$distribution_id" \
> "$tmp1"
readonly etag=$(jq -r '.ETag' < "$tmp1")
cat "$tmp1" \
| jq '(.DistributionConfig.CacheBehaviors.Items[] | select(.PathPattern=="dist/sxf/*") | .LambdaFunctionAssociations.Items[] | select(.EventType=="origin-request") | .LambdaFunctionARN ) |= "'"$lambda_arn"'"' \
| jq '.DistributionConfig' \
> "$tmp2"
# the dist config has to be in the file
# and be referred in specific way.
aws cloudfront update-distribution \
--id "$distribution_id" \
--distribution-config "file://$tmp2" \
--if-match "$etag"
rm -f "$tmp1" "$tmp2"
could not figure out how to update the lambda arn only.
The link that you provided explains the process:
The update process includes getting the current distribution configuration, updating the XML document that is returned to make your changes, and then submitting an UpdateDistribution request to make the updates.
This means that you can't just update the Lambda ARN directly. You have to:
Call get-distribution-config to obtain full current configuration.
Change the lambda arn in the configuration data obtained.
Upload the entire new configuration using update-distribution.
The process requires extra attention which is also explained in the docs under Warning:
You must strip out the ETag parameter that is returned.
Additional fields are required when you update a distribution.
and more.
The process is indeed complex. Thus, if you can, I would recommend trying this on a test/dummy CloudFront distribution first rather than directly on the production version.
Something like this:
#!/bin/bash
set -x
TEMPDIR=$(mktemp -d)
CONFIG=$(aws cloudfront get-distribution-config --id CGSKSKLSLSM)
ETAG=$(echo "${CONFIG}" | jq -r '.ETag')
echo "${CONFIG}" | jq '.DistributionConfig' > ${TEMPDIR}/orig.json
echo "${CONFIG}" | jq '.DistributionConfig | .DefaultCacheBehavior.LambdaFunctionAssociations.Items[0].LambdaFunctionARN= "arn:aws:lambda:us-east-1:xxxxx:function:test-func:3"' > ${TEMPDIR}/updated.json
aws cloudfront update-distribution --id CGSKSKLSLSM --distribution-config file://${TEMPDIR}/updated.json --if-match "${ETAG}"
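To verify the change took effect, you can read the config back and inspect the association; a quick check, assuming the default cache behavior actually carries Lambda associations:
aws cloudfront get-distribution-config --id CGSKSKLSLSM \
    | jq '.DistributionConfig.DefaultCacheBehavior.LambdaFunctionAssociations.Items[].LambdaFunctionARN'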
I can get the details with
$ aws lambda get-function --function-name random_number
{
"Configuration": {
"FunctionName": "random_number",
"FunctionArn": "arn:aws:lambda:us-east-2:193693970645:function:random_number",
"Runtime": "ruby2.5",
"Role": "arn:aws:iam::193693970645:role/service-role/random_number-role-8cy8a1a7",
...
But how can I get just a couple of fields, like the function name?
I tried:
$ aws lambda get-function --function-name random_number --query "Configuration[*].[FunctionName]"
but I get null
Your overall approach is correct, you just need to adjust the query:
$ aws lambda get-function --function-name random_number \
--query "Configuration.FunctionName" --output text
I also added a parameter to convert the result to text, which makes processing a bit easier.
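The same approach extends to several fields at once via a JMESPath multiselect list; for example, using fields from the output above:
$ aws lambda get-function --function-name random_number \
  --query "Configuration.[FunctionName, Runtime, Role]" --output text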
Here is a simple awk (GNU awk) one-liner that does the trick: it extracts the value of quoted field #3, only for lines matching /FunctionName/.
awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
Piped with your initial command:
$ aws lambda get-function --function-name random_number | awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
One way to achieve that is by using jq. Since jq operates on JSON, the command's output must be in JSON format (--output json).
From the docs :
jq is like sed for JSON data - you can use it to slice and filter and
map and transform structured data with the same ease that sed, awk,
grep and friends let you play with text.
Usage example :
aws lambda get-function --function-name test --output json | jq -r '.Configuration.FunctionName'
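jq can also pull several fields in one pass; for example, sticking with the function from the question:
aws lambda get-function --function-name random_number --output json \
  | jq -r '.Configuration | "\(.FunctionName) \(.Runtime)"'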
Use get-function-configuration as in the following:
aws lambda get-function-configuration --function-name MyFunction --query "[FunctionName]"
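Note that --query "[FunctionName]" wraps the result in a JSON list. For a bare value that is easier to use in scripts, a plain key plus text output works:
aws lambda get-function-configuration --function-name MyFunction \
  --query FunctionName --output text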
Can someone help me out with handling a dynamic "ParameterValue" in a parameters.json file?
I'm running cloudformation create-stack and passing a parameters.json file via --parameters. A few "ParameterValue" entries in the file need to be dynamic, for example a timestamp, or index values appended from a loop. So how can I modify the parameters.json file to handle dynamic values?
An alternative I could go with is to skip the parameters.json file and pass the key and value like below to the create-stack command inside the loop in the script:
--parameters ParameterKey="XYZ",ParameterValue="${someval}${index}"
I would create a parameters.json.template file to hold the values in their parameterized form, like you show:
[
{
"ParameterKey": "XYZ",
"ParameterValue": "{someval}{index}"
},
{
"ParameterKey": "ABC",
"ParameterValue": "staticval-{suffix}"
}
]
I am assuming you are doing this on the CLI, based on the use of the --parameters flag. In that case, I would create a script to merge the template file with the values (into a generated file) and call the create-stack CLI command after that.
Something like this on linux:
#! /bin/bash
# create output file from template
cp templates/parameters.json.template generated/parameters.json
# merge dynamic values into templated file
sed -i "s/{someval}/$SOME_VAL/g" generated/parameters.json
sed -i "s/{index}/$INDEX/g" generated/parameters.json
sed -i "s/{suffix}/$SUFFIX/g" generated/parameters.json
aws cloudformation create-stack ... --parameters file://generated/parameters.json ...
This of course assumes your script has access to your dynamic values.
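For example, the dynamic values can be supplied in the environment when invoking the script (create_stack.sh is a hypothetical name for the snippet above):
SOME_VAL="abc" INDEX="1" SUFFIX="$(date +%Y%m%d)" ./create_stack.sh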
I have been following the advice in this post. I've created an API key on AWS and set my POST method to require an API key.
I have also setup a usage plan and linked that API key to it.
My API key is enabled
When testing requests with Postman, my requests still go through without any additional headers.
I was expecting no requests to go through unless I had included a header in my request like this "x-api-key":"my_api_key"
Do I need to change the endpoint I send requests to in postman for them to go through API Gateway?
If you need to require an API key for each method, then "API Key Required" has to be set to true on every method individually.
Go to Resources, select your resource and method, go to Method Request, and set "API Key Required" to true.
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-key-source.html
If you want, I've made the following script to enable the API key on every method of a given API. It requires the jq tool for advanced JSON parsing. You can also find it on this gist.
#!/bin/bash
api_gateway_method_enable_api_key() {
local api_id=$1
local method_id=$2
local method=$3
aws --profile "$profile" --region "$region" \
apigateway update-method \
--rest-api-id "$api_id" \
--resource-id "$method_id" \
--http-method "$method" \
--patch-operations op="replace",path="/apiKeyRequired",value="true"
}
# change this to 1 in order to execute the update
do_update=0
profile=your_profile
region=us-east-1
id=your_api_id
tmp_file="/tmp/list_of_endpoint_and_methods.json"
aws --profile $profile --region $region \
apigateway get-resources \
--rest-api-id $id \
--query 'items[?resourceMethods].{p:path,id:id,m:resourceMethods}' >"$tmp_file"
while read -r line; do
path=$(jq -r '.p' <<<"$line")
method_id=$(jq -r '.id' <<<"$line")
echo "$path"
# do not update OPTIONS method
for method in GET POST PUT DELETE; do
has_method=$(jq -r ".m.$method" <<<"$line")
if [ "$has_method" != "null" ]; then
if [ $do_update -eq 1 ]; then
api_gateway_method_enable_api_key "$id" "$method_id" "$method"
echo " $method method changed"
else
echo " $method method will be changed"
fi
fi
done
done <<<"$(jq -c '.[]' "$tmp_file")"
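Once "API Key Required" is true on a method, keep in mind the change only takes effect after you redeploy the API to its stage. You can then verify with a request like the following (endpoint and key are hypothetical); without the header, API Gateway should return 403 Forbidden:
curl -X POST "https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/your-resource" \
  -H "x-api-key: my_api_key"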
Could you please point me to the appropriate documentation topic or provide an example of how to add an index to DynamoDB, as I couldn't find any related info.
According to this blog: http://aws.amazon.com/blogs/aws/amazon-dynamodb-update-online-indexing-reserved-capacity-improvements/?sc_ichannel=em&sc_icountry=global&sc_icampaigntype=launch&sc_icampaign=em_130867660&sc_idetail=em_1273527421&ref_=pe_411040_130867660_15 it seems to be possible in the UI, but there is no mention of CLI usage.
Thanks in advance,
Yevhenii
The aws command has help at every level of subcommand. For example, you can run aws help to get a list of all service names and discover the name dynamodb. Then you can run aws dynamodb help to find the list of DDB commands and see that update-table is a likely candidate. Finally, aws dynamodb update-table help shows you the flags needed to add a global secondary index.
The AWS CLI documentation is really poor and lacks examples. Evidently AWS is promoting the SDK or the console.
This should work for the update. Note that the whole shorthand value must be passed as a single quoted shell word, and --attribute-definitions may only list attributes that are used in a key schema:
aws dynamodb update-table --table-name Test \
    --attribute-definitions AttributeName=State,AttributeType=S \
    --global-secondary-index-updates \
    'Create={IndexName=state-index,KeySchema=[{AttributeName=State,KeyType=HASH}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[City]},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}}'
Here's a shell function to do this that sets the table's R/W capacities, and optionally updates a global secondary index's throughput as well if an index name is passed as the fourth argument:
dynamodb_set_caps() {
    # [ "$1" ] || fail_exit "Missing table name"
    # [ "$2" ] || fail_exit "Missing read capacity"
    # [ "$3" ] || fail_exit "Missing write capacity"
    if [ "$4" ] ; then
        aws dynamodb update-table --region "$region" --table-name "${1}" \
            --provisioned-throughput ReadCapacityUnits=${2},WriteCapacityUnits=${3} \
            --global-secondary-index-updates \
            "Update={IndexName=${4},ProvisionedThroughput={ReadCapacityUnits=${2},WriteCapacityUnits=${3}}}"
    else
        aws dynamodb update-table --region "$region" --table-name "${1}" \
            --provisioned-throughput ReadCapacityUnits=${2},WriteCapacityUnits=${3}
    fi
}
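Example usage, assuming region is set beforehand (table name and capacities are illustrative):
region=us-east-1
dynamodb_set_caps Test 10 5              # table throughput only
dynamodb_set_caps Test 10 5 state-index  # table plus the state-index GSI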
Completely agree that the aws docs are lacking in this area
Here is reference for creating a global secondary index:
https://docs.aws.amazon.com/pt_br/amazondynamodb/latest/developerguide/getting-started-step-6.html
However, the example only shows creating an index on a simple (single-attribute) primary key.
This code helped me to create a global secondary index for a composite primary key (remember that --attribute-definitions may only list attributes that are used in a key schema):
aws dynamodb update-table \
--table-name YourTableName \
--attribute-definitions AttributeName=GSI1PK,AttributeType=S \
AttributeName=GSI1SK,AttributeType=S \
--global-secondary-index-updates \
"[{\"Create\":{\"IndexName\": \"GSI1\",\"KeySchema\":[{\"AttributeName\":\"GSI1PK\",\"KeyType\":\"HASH\"},{\"AttributeName\":\"GSI1SK\",\"KeyType\":\"RANGE\"}], \
\"ProvisionedThroughput\": {\"ReadCapacityUnits\": 5, \"WriteCapacityUnits\": 5 },\"Projection\":{\"ProjectionType\":\"ALL\"}}}]" --endpoint-url http://localhost:8000
Note that the --endpoint-url http://localhost:8000 on the last line assumes you are creating this index in a local (DynamoDB Local) database. If not, just delete it.