AWS CLI: Error parsing parameter - amazon-web-services

We are trying to create a listener rule with a host-header condition on an Elastic Load Balancer using the AWS CLI.
aws elbv2 create-rule \
--listener-arn arn:aws:elasticloadbalancing:ap-south-1:123456789:listener/app/testing-alb/6sdfgsgs5fg45s4fg5sd \
--conditions test.com \
--priority 5 \
--actions arn:aws:elasticloadbalancing:ap-south-1:123456789:targetgroup/tgtest-1/hsdjif444225 \
--region ap-south-1 \
--output json
However, we got an error like this:
Error parsing parameter '--conditions': Expected: '=', received: 'EOF' for input:
test.com
^

The --conditions parameter expects either shorthand key=value pairs or a JSON structure, not a bare hostname. If you want to do this inline, here is the correct syntax:
aws elbv2 create-rule \
--listener-arn arn:aws:elasticloadbalancing:ap-south-1:123456789:listener/app/testing-alb/6sdfgsgs5fg45s4fg5sd \
--conditions '[{"Field":"host-header","HostHeaderConfig":{"Values":["test.com"]}}]' \
--priority 5 \
--actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:ap-south-1:123456789:targetgroup/tgtest-1/hsdjif444225 \
--region ap-south-1 \
--output json
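Alternatively, you can keep the conditions JSON in a separate file and pass it with file:// (conditions.json is just an example file name):
aws elbv2 create-rule \
--listener-arn arn:aws:elasticloadbalancing:ap-south-1:123456789:listener/app/testing-alb/6sdfgsgs5fg45s4fg5sd \
--conditions file://conditions.json \
--priority 5 \
--actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:ap-south-1:123456789:targetgroup/tgtest-1/hsdjif444225 \
--region ap-south-1 \
--output json
where conditions.json contains:
[{"Field":"host-header","HostHeaderConfig":{"Values":["test.com"]}}]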

Related

AWS CLI create batch job

I have a bash script that creates a manifest.csv file (example below) and then uses it to create an S3 Batch Operations job. I ran the same job through the console and it works, so roles and permissions are correct. Any help will be appreciated.
test-bucket-batch,Test/testing1.json
test-bucket-batch,Test/testing2.json
test-bucket-batch,Test/testing3.json
test-bucket-batch,Test/testing4.json
aws s3control create-job \
--account-id $ACCOUNT_ID \
--region $REGION \
--confirmation-required \
--client-request-token $(uuidgen) \
--operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::'$BUCKET_NAME'/object_restore/"}}' \
--manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::'$BUCKET_NAME'/object_restore/manifest.csv","ETag":'$ETAG'}}' \
--report '{"Bucket":"arn:aws:s3:::'$BUCKET_NAME'/object_restore/","Prefix":"final-reports", "Format":"Report_CSV_20180820","Enabled": true,"ReportScope":"AllTasks"}' \
--description 'S3 Copy Job' \
--priority 42 \
--role-arn $ROLE_ARN
ERROR: An error occurred (InvalidRequest) when calling the CreateJob operation: Request invalid
The issue was related to the prefix:
aws s3control create-job \
--account-id $ACCOUNT_ID \
--region $REGION \
--confirmation-required \
--client-request-token $(uuidgen) \
--operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::'$BUCKET_NAME'","TargetKeyPrefix":"/object_restore/"}}' \
--manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::'$BUCKET_NAME'/object_restore/manifest.csv","ETag":'$ETAG'}}' \
--report '{"Bucket":"arn:aws:s3:::'$BUCKET_NAME'","Prefix":"/object_restore/final-reports", "Format":"Report_CSV_20180820","Enabled": true,"ReportScope":"AllTasks"}' \
--description 'S3 Copy Job' \
--priority 42 \
--role-arn $ROLE_ARN
Note the fields that changed:
In the operation, TargetKeyPrefix was added: "TargetKeyPrefix":"/object_restore/"
In the report, Prefix was changed to: "Prefix":"/object_restore/final-reports"

How do I specify multiple filters to aws ec2 describe-vpc-peering-connections?

I am trying to get a list of 'active' peering connections via aws ec2 describe-vpc-peering-connections. Here is what I have tried:
aws ec2 describe-vpc-peering-connections --region=eu-west-3 \
--filter 'Name=accepter-vpc-info.vpc-id,Values=vpc-xxxxxx Name=status-code,Values=active' \
--query 'VpcPeeringConnections[*].VpcPeeringConnectionId' --output text
But I get the error:
Error parsing parameter '--filters': Second instance of key "Values" encountered for input:
Name=accepter-vpc-info.vpc-id,Values=vpc-xxxxxxxx Name=status-code,Values=active
^
This is often because there is a preceding "," instead of a space.
I think I do need the comma, right? Is there something else I am getting wrong?
aws ec2 describe-vpc-peering-connections \
--region=eu-west-3 \
--filter Name=accepter-vpc-info.vpc-id,Values=vpc-xxxxxx \
--filter Name=status-code,Values=active \
--query 'VpcPeeringConnections[*].VpcPeeringConnectionId' \
--output text
OR
aws ec2 describe-vpc-peering-connections \
--region=eu-west-3 \
--filter 'Name=accepter-vpc-info.vpc-id,Values=vpc-xxxxxx' \
'Name=status-code,Values=active' \
--query 'VpcPeeringConnections[*].VpcPeeringConnectionId' \
--output text
Combining server-side and client-side filtering
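As a sketch of that idea, you can filter by VPC on the server side with --filters and narrow to active connections on the client side with a JMESPath --query, which should be equivalent in spirit to the commands above:
aws ec2 describe-vpc-peering-connections \
--region=eu-west-3 \
--filter 'Name=accepter-vpc-info.vpc-id,Values=vpc-xxxxxx' \
--query 'VpcPeeringConnections[?Status.Code==`active`].VpcPeeringConnectionId' \
--output text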

AWS DynamoDB Local add global secondary index using awscli

Hello, I am trying to execute the following command in order to add a Global Secondary Index to an existing table:
aws dynamodb update-table \
--region eu-west-1 \
--endpoint-url http://127.0.0.1:8000/ \
--table-name ssib_dev_assetsTable \
--attribute-definitions AttributeName=AssetGroup,AttributeType=S \
--global-secondary-index-updates \
Create="{IndexName=gsi_group,KeySchema=[{AttributeName=AssetGroup,KeyType=HASH}],Projection={ProjectionType=ALL}}" \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10
After roughly 10 seconds I get the following response, without any explicit error message. I use https://hub.docker.com/r/cnadiminti/dynamodb-local/ to emulate my DB.
An error occurred (InternalFailure) when calling the UpdateTable
operation (reached max retries: 9): The request processing has failed
because of an unknown error, exception or failure.
The parameter --global-secondary-index-updates should be a valid JSON string, which it is not in your case. It is also missing a mandatory key, ProvisionedThroughput.
https://docs.aws.amazon.com/cli/latest/reference/dynamodb/update-table.html
Here goes:
aws dynamodb update-table \
--region eu-west-1 \
--endpoint-url http://127.0.0.1:8000 \
--table-name ssib_dev_assetsTable \
--attribute-definitions AttributeName=AssetGroup,AttributeType=S \
--global-secondary-index-updates '[{"Create":{"IndexName":"gsi_group","KeySchema":[{"AttributeName":"AssetGroup","KeyType":"HASH"}],"Projection":{"ProjectionType":"ALL"},"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":5}}}]'
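To confirm the index was actually created and has gone active, you can check it afterwards, for example:
aws dynamodb describe-table \
--region eu-west-1 \
--endpoint-url http://127.0.0.1:8000 \
--table-name ssib_dev_assetsTable \
--query 'Table.GlobalSecondaryIndexes[].[IndexName,IndexStatus]' \
--output text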
By the way, you should probably use the official image instead of an image from some random Docker user: https://hub.docker.com/r/amazon/dynamodb-local/
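For example, the official image can be started with:
docker run -p 8000:8000 amazon/dynamodb-local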

Pipe file directly to AWS SSM parameter store?

Just curious: how do I pipe a file directly into the AWS SSM Parameter Store? e.g.
# Put into ssm parameter store
cat my_github_private.key | aws ssm put-parameter --region ap-southeast-1 --name MY_GITHUB_PRIVATE_KEY --type SecureString --key-id alias/aws/ssm --value ???
# And read it back
aws ssm get-parameter --region ap-southeast-1 --name MY_GITHUB_PRIVATE_KEY --with-decryption --query Parameter.Value --output text > my_github_private.key.1
# Two should be identical
diff my_github_private.key my_github_private.key.1
Rather than taking the value from stdin, can you add it directly to the command-line arguments?
aws ssm put-parameter \
--region ap-southeast-1 \
--name MY_GITHUB_PRIVATE_KEY \
--type SecureString \
--key-id alias/aws/ssm \
--value file://my_github_private.key
Note: --value "$(cat my_github_private.key)" also works
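One caveat: a standard SecureString parameter is limited to 4 KB, so if your key file is larger you may need the Advanced tier (the --tier option requires a reasonably recent CLI version):
aws ssm put-parameter \
--region ap-southeast-1 \
--name MY_GITHUB_PRIVATE_KEY \
--type SecureString \
--key-id alias/aws/ssm \
--tier Advanced \
--value file://my_github_private.key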
If you are using Terraform:
data "local_file" "yourkeyfile" {
filename = "keys/yourkey.pem"
}
resource "aws_ssm_parameter" "aresource-name-for-your-key" {
name = "/the/ssm/key"
type = "SecureString"
value = "${data.local_file.yourkeyfile.content}"
}
Remember to encrypt yourkey.pem, for example using blackbox.
@tkwargs, how do I get only a value out of a key.json file? For example:
aws ssm put-parameter \
--region ap-southeast-1 \
--name MY_GITHUB_PRIVATE_KEY \
--type SecureString \
--key-id alias/aws/ssm \
--value "$(cat my_github_private.json and get value only)"

Execution failed due to configuration error: Malformed Lambda proxy response

For reference, you can visit this AWS issue I've created:
https://github.com/aws/aws-cli/issues/3118
I use the AWS CLI commands below, but they are inside a *.sh file.
There is no problem with the script; it runs successfully.
NOTE: I created the API manually.
```
# remove GET method
aws apigateway delete-method \
--rest-api-id 2132132 \
--resource-id 8998989 \
--http-method GET \
> /dev/null 2>&1 && echo '-> [aws] APIGateway GET method removed'
# remove permission first
aws lambda remove-permission \
--function-name function_main \
--statement-id function_main \
> /dev/null 2>&1 && echo '-> [aws] APIGateway permission removed'
# and then add method
aws apigateway put-method \
--rest-api-id 2132132 \
--resource-id 8998989 \
--http-method GET \
--authorization-type 'NONE' \
--region us-east-1 \
> /dev/null 2>&1 && echo '-> [aws] APIGateway GET method created'
# and add integration
aws apigateway put-integration \
--region us-east-1 \
--rest-api-id 2132132 \
--resource-id 8998989 \
--http-method GET \
--type AWS_PROXY \
--integration-http-method GET \
--passthrough-behavior WHEN_NO_MATCH \
--uri "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:23645667:function:function_main/invocations" \
> /dev/null 2>&1 && echo '-> [aws] APIGateway integration added'
# and add method response
aws apigateway put-method-response \
--rest-api-id 2132132 \
--resource-id 8998989 \
--http-method GET \
--status-code 200 \
--response-models "{\"application/json\": \"Empty\"}" \
> /dev/null 2>&1 && echo '-> [aws] APIGateway GET method response created'
# and then add permission
aws lambda add-permission \
--function-name function_main \
--statement-id 4454854604c23688a9f42907de4d18ec \
--action "lambda:InvokeFunction" \
--principal apigateway.amazonaws.com \
--source-arn "arn:aws:execute-api:us-east-1:23645667:2132132/*/GET/" \
> /dev/null 2>&1 && echo '-> [aws] APIGateway permission added'
```
But the output is this: in the method response I can't see "HTTP STATUS: Proxy" or even the 'Select an integration response' option, unlike when I add the method and integration manually (please see the difference in the images below).
(Screenshots: ERROR vs. WORKING)
AWS CLI version: aws-cli/1.14.32 Python/2.7.10 Darwin/17.3.0 botocore/1.8.36
I just want to share the answer from the GitHub issue by https://github.com/kyleknap:
> @XanderDwyl I think you need to include an apigateway put-integration-response command in your shell script even if you are doing a proxy integration. We had to do something similar in old version of chalice. I would recommend checking out some of the source code. It is Python, but the parameters and values map directly back to CLI commands and parameters. So it should be straightforward to figure out what may be missing. Let us know if that helps.
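Based on that suggestion, a rough sketch of the missing call, reusing the placeholder IDs from the script above, might look like this:
aws apigateway put-integration-response \
--rest-api-id 2132132 \
--resource-id 8998989 \
--http-method GET \
--status-code 200 \
--selection-pattern "" \
> /dev/null 2>&1 && echo '-> [aws] APIGateway integration response created'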