Alright, so I'm trying to programmatically store my Serverless generated API endpoint in parameter store for another project to ingest.
Just for an example, I'm going to try to store google.com.
aws ssm put-parameter --name /dev/someStore --value https://google.com --type String
This fails, understandably so.
Error parsing parameter '--value': Unable to retrieve https://google.com: received non 200 status code of 301
However, if I wrap the URL in quotes...
aws ssm put-parameter --name /dev/someStore --value "https://google.com" --type String
It still fails with the same error. Is there any way to stop the cli from trying to evaluate the URL and just save the goddamn string?
This is happening because of a questionable behavior in awscli v1: when it sees a parameter value that looks like a URL, it issues an HTTP GET and uses the response as the value. This does not happen in awscli v2.
You can work around this behavior as follows:
aws ssm put-parameter --cli-input-json '{
"Name": "/dev/someStore",
"Value": "https://google.com",
"Type": "String"
}'
Or you can store the JSON in a file named params.json and invoke:
aws ssm put-parameter --cli-input-json file://params.json
The underlying issue was reported at aws/aws-cli/issues/2507.
By default, the AWS CLI v1 follows any string parameter that starts with https:// or http://. The URL is fetched, and the downloaded content is used as the parameter value instead of the URL itself.
To make the CLI treat strings prefixed with https:// or http:// no differently from normal string parameters, run:
aws configure set cli_follow_urlparam false
cli_follow_urlparam controls whether or not the CLI will attempt to follow URL links in parameters that start with either prefix https:// or http://.
See https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
Problem:
aws ssm put-parameter --name /config/application/some-url --value http://google.com --type String --region eu-central-1 --overwrite
Error parsing parameter '--value': Unable to retrieve http://google.com: received non 200 status code of 301
Solution:
aws configure set cli_follow_urlparam false
aws ssm put-parameter --name /config/application/some-url --value http://google.com --type String --region eu-central-1 --overwrite
{
"Version": 1
}
The GitHub discussion on this topic, linked by @jarmod, also had another solution for this. I'll replicate it here so others can avoid scanning through the whole thread.
Add the following to your ~/.aws/config along with any other settings present.
[default]
cli_follow_urlparam = false
P.S. It seems this is also mentioned in the AWS documentation, under the "Loading Parameters from a File" section.
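That file-loading behavior also gives a workaround that needs no configuration change: put the literal value in a local file and pass it with the file:// prefix, so the CLI never treats it as a URL to fetch. A minimal sketch, assuming a scratch file named url.txt:
# Write the literal URL into a file, then load its contents verbatim as the parameter value
echo -n "https://google.com" > url.txt
aws ssm put-parameter --name /dev/someStore --value file://url.txt --type String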
Another option is to not include the protocol in the stored value at all and keep just the domain name or path; after retrieval, prepend whichever protocol is appropriate. Sometimes we want to use https, http, or even ssh for the same resource. Take a git URL, for example: the same path can be accessed over multiple protocols with the appropriate ports, and the path is the only part that really needs to be stored.
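A minimal sketch of that idea, reusing the /dev/someStore name from the question and assuming the consumer adds the scheme itself:
# Store only the host/path, without a scheme, so the CLI has nothing to fetch
aws ssm put-parameter --name /dev/someStore --value "google.com" --type String
# On the consumer side, prepend whichever protocol is needed
ENDPOINT="https://$(aws ssm get-parameter --name /dev/someStore --query Parameter.Value --output text)"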
To complement @jarmod's answer, here is an example showing how to handle the Overwrite flag, a URL kept in a bash variable, and building the JSON string across multiple lines of bash.
URL='https://www.some.url.com'
json_params='{'
json_params+='"Name": "/param/path",'
json_params+='"Value": "'${URL}'",'
json_params+='"Type": "String",'
json_params+='"Overwrite": true'
json_params+='}'
aws ssm put-parameter \
--cli-input-json "${json_params}"
Related
I am new to AWS. I am working on integrating SSM parameters to store database passwords and using them at the time of CloudFormation.
We observed an issue with SSM parameter values that have special characters at the beginning of the string.
For example, if the password is Test#123, it works fine. But if the password is #Test!123 then it’s not working.
Is there any workaround for this?
Alright, I think I found the solution to my problem. I have a password like this "complicated!word+=!here!help+", and this is how I am able to escape it:
aws ssm put-parameter --name /config/my-api_alpha/my-db.jdbc.password --value "complicated\!word+=\!here\!help+" --type SecureString --key-id arn:aws:kms:us-east-1:1234567890:key/this-is-a-kms-keyId
The double quotes are optional; this produces the same result:
aws ssm put-parameter --name /config/my-api_alpha/my-db.jdbc.password --value complicated\!word+=\!here\!help+ --type SecureString --key-id arn:aws:kms:us-east-1:1234567890:key/this-is-a-kms-keyId
I resolved this by enclosing the password that begins with special characters in double quotes. For example:
"#Test!123"
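For illustration, a sketch of that approach with a hypothetical parameter name; note that single quotes also work and additionally stop an interactive bash shell from history-expanding the !:
# Quote the value so the shell passes the leading # and the ! through untouched
aws ssm put-parameter --name /config/application/db-password --value '#Test!123' --type SecureString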
I am trying to update my rest api via the aws cli and I don't get the results I desire. I am running the commands
aws apigateway put-rest-api --rest-api-id XXXXXXXXXX --mode merge --body 'file://api.yaml'
aws apigateway create-deployment --rest-api-id XXXXXXXXXX --stage-name latest
However, I notice that even though the endpoint was added, documentation-specific things such as tags and descriptions are not being set, so when we fetch the Swagger definition from AWS these keys are omitted.
I put the YAML file I am using into https://editor.swagger.io/ and found no problems there either.
I don't get any errors when running the above commands. I don't understand why the "merge" process is not finding the Swagger keys and applying them.
I figured out that not only do I need to run an update via the put-rest-api command, but I also need to publish the documentation (I did this via the AWS Console UI and it worked). I haven't found the best command to do so via the aws cli yet. Will make an edit when I do.
EDIT
I have learned that the aws cli command put-rest-api is a precursor for updating the documentation and the definition of the REST API, and that the two are deployed via different commands. So I was missing the step:
aws apigateway create-documentation-version --rest-api-id XXXXXXXXX --documentation-version test_version --stage-name dev
As you may or may not know, you can only deploy a given documentation version once, so to deploy an existing version to another stage use:
aws apigateway update-stage --stage-name dev --rest-api-id tu2ye61vyg --patch-operations "op=replace,path=/documentationVersion,value=test_version"
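Putting it together, a sketch of the full order of operations using the same placeholder IDs and stage names as above:
# 1. Merge the updated OpenAPI/Swagger definition (including documentation parts)
aws apigateway put-rest-api --rest-api-id XXXXXXXXXX --mode merge --body 'file://api.yaml'
# 2. Deploy the API definition to a stage
aws apigateway create-deployment --rest-api-id XXXXXXXXXX --stage-name dev
# 3. Publish the documentation as a named version on that stage
aws apigateway create-documentation-version --rest-api-id XXXXXXXXXX --documentation-version test_version --stage-name dev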
On my first AWS account I have parameters specified in the following manner:
/config/a => value1
/config/b => value2
/config/c/a => value31
/config/c/b => value32
I want to move these to my second aws account.
I created these parameters in the parameter store manually.
How could I easily copy these values from one account to the other?
Using aws ssm get-parameters --names "<param-name>" would be a bit too difficult, since I have way too many parameters.
Retrieve all parameters via aws ssm get-parameters-by-path --path "/relative/path/" --recursive
Write the resulting JSON down somewhere, e.g. into a file
Prepare the put commands, e.g. with JS:
// Assumes the JSON output from the previous step was saved as params.json
const params = require('./params.json');
for (const value of params.Parameters) {
  const { Name, Value } = value;
  // Print a put-parameter command for each retrieved parameter
  console.log(`aws ssm put-parameter --name "${Name}" --value "${Value}" --type "String"`);
}
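If you prefer to stay in the shell, here is a minimal sketch of the same idea using jq; the profile names source and target are assumptions for the two accounts:
# Generate put-parameter commands for the target account from the source account's parameters
aws ssm get-parameters-by-path --path "/config/" --recursive --with-decryption --profile source --output json \
  | jq -r '.Parameters[] | @sh "aws ssm put-parameter --profile target --overwrite --type \(.Type) --name \(.Name) --value \(.Value)"'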
I created a utility which does exactly what you want:
pip install aws-ssm-copy
aws-ssm-copy --dry-run --source-profile <source> --recursive /
Check out the aws-ssm-copy utility and the accompanying blog post for more details.
Maybe get-parameters-by-path suits here:
aws ssm get-parameters-by-path --path "/" --recursive
https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters-by-path.html#synopsis
Here is my version that outputs all parameters' Name, Type and Value in a TSV (tab-separated values) format:
aws ssm get-parameters-by-path --path "/" --recursive --query="Parameters[*].[Name, Type, Value]" --output text
Example response:
/prod/aaa String xxx
/prod/bbb String yyy
/prod/ccc String zzz
Well, I know it has been a year, but for people who are still trying to figure this out, here is a detailed solution.
Run the following command to fetch all the parameters in your current region:
aws ssm get-parameters-by-path --path "/" --recursive --with-decryption --region eu-west-2
You will get a JSON-formatted response. Copy the response and paste it into a file (a *.txt file, then rename it to *.json). You now have a JSON file with all the current parameters.
I published the code in a git repository here. Clone that repository, then add your desired destination region here:
const ssm = new AWS.SSM({
  apiVersion: '2014-11-06',
  region: 'eu-west-2' // add your destination region here
});
and your json file here: const { Parameters } = await require('<YOUR JSON FILE>.json');
Then install the npm packages by running npm install and run the script with npm start.
Using the following command you can easily get the names and values from the parameter store:
$ aws ssm get-parameters-by-path --path "/" --recursive --query="Parameters[*].[Name, Value]" --output json > parameters.json
I'm using
GitBash v2.17.0
AWS CLI v1.16.67
Windows 10
Problem
I've created a SecureString parameter in the AWS SSM Parameter Store. For sake of example, let's call the parameter
/levelOne/levelTwo
I'm trying to retrieve the parameter using the AWS CLI. To do this I am using the following command:
aws ssm get-parameters --names '/levelOne/LevelTwo' --with-decryption
The problem is that in the result returned, the parameter name is prefixed with C:/Program Files/Git.
Can anyone explain what I have done wrong please?
Thanks
This is caused by POSIX path conversion in MinGW.
You can work around this by substituting // for the leading /, and then replacing the subsequent forward slashes with backslashes, e.g.
aws ssm get-parameters --names '//levelOne\levelTwo'
This command will only run correctly in MinGW, i.e. it will fail in a regular (non-MinGW) Bash shell or in Windows CMD.
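Another common workaround in Git Bash (a sketch; MSYS_NO_PATHCONV is specific to the MSYS/Git for Windows runtime) is to disable the path conversion for a single invocation, so the parameter name can keep its normal slashes:
# Turn off MSYS path mangling just for this command
MSYS_NO_PATHCONV=1 aws ssm get-parameters --names '/levelOne/levelTwo' --with-decryption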
I faced the same issue.
Check the region that was selected when you created the parameter in the console.
The reason for this is that AWS SSM is a regional service.
aws ssm get-parameters --names "/levelOne/LevelTwo" --region us-west-1 --with-decryption
I got it working by adding a space in front of the --names parameter value, which makes it work OS-independently:
aws ssm get-parameters --names " /levelOne/LevelTwo" --with-decryption
I am attempting to setup some custom CloudWatch metrics using mon-put-data from within my AWS EC2 instance. According to the documentation I am using it correctly.
mon-put-data --namespace Layer --metric-name ResponseTime --dimensions "app=AppName" --value 2
However, when I run it I get the following error:
mon-put-data: Malformed input-Bad credentials in file: /user/.aws/credentials [keyId: null | secretKey null]
The format of the credentials file is below; it was auto-generated using aws configure:
[default]
aws_access_key_id = KJHJKHJKHJKHJKHJKHJK
aws_secret_access_key = KHKJJKHJKHJKHJH123123kjhjkhjk12312
I have also confirmed that the AWS_CREDENTIAL_FILE path exists and is correct. Also, I have confirmed that the IAM user has full access to CloudWatch and EC2.
Can someone please tell me what I am doing wrong?
I managed to get it working with the addition of the -I and -S options. It's not really ideal to have the credentials inline, but it works for now.
mon-put-data -I <Key ID> -S <Secret Key> --namespace Layer --metric-name ResponseTime --dimensions "app=AppName" --value 2
Evidently the mon-put-data command uses a credential file with a different format from the one created by the AWS CLI. Unfortunately there is nothing in the documentation defining it, and I can't find the code to debug it.
I originally misread your question and thought you were using the actual AWS CLI tool, which uses the INI-style format like you posted:
[default]
aws_access_key_id = KJHJKHJKHJKHJKHJKHJK
aws_secret_access_key = KHKJJKHJKHJKHJH123123kjhjkhjk12312
However, when you use mon-put-data, it doesn't follow any of the configuration or options of the CLI.
For the service-specific CLIs (like the CloudWatch tools), you have to set up the tool as detailed on this page.
You have to generate a file in this format:
AWSAccessKeyId=<Write your AWS access ID>
AWSSecretKey=<Write your AWS secret key>
Then you have to pass that file via the --aws-credential-file argument, or set the environment variable AWS_CREDENTIAL_FILE.
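A minimal sketch of that setup, assuming the legacy-format credentials were saved to ~/cw-credentials (the file name is just an illustration):
# Point the CloudWatch command-line tools at the legacy-format credential file
export AWS_CREDENTIAL_FILE=~/cw-credentials
mon-put-data --namespace Layer --metric-name ResponseTime --dimensions "app=AppName" --value 2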
If you were using the standard all-in-one AWS CLI, you could do the exact same thing as mon-put-data by using aws cloudwatch put-metric-data.
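For comparison, a sketch of the equivalent call with the unified AWS CLI, which reads the INI-style ~/.aws/credentials file as usual:
# Same custom metric as the mon-put-data example above
aws cloudwatch put-metric-data --namespace Layer --metric-name ResponseTime --dimensions app=AppName --value 2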