Use SSM Document to check a particular application installation

I'm trying to use the AWS SSM document aws:runPowerShellScript action to check whether a particular application is installed on Windows servers. The PowerShell script is very simple, but document validation keeps failing.
The PowerShell script contains a registry path, which includes colons and backslashes, and I suspect this contributes to the problem. I tried changing all the backslashes to forward slashes, with no luck.
schemaVersion: "2.2"
description: "Command Document to check if This Software is installed"
mainSteps:
  - action: "aws:runPowerShellScript"
    name: "CheckThisSoftware"
    inputs:
      runCommand:
        - "$ResultMsg = (Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*).DisplayName -Contains 'Software Name Here'"
        - "Write-Output $ResultMsg"
I keep getting InvalidDocumentContent: null when I try to submit the document.

I fixed this by escaping the special characters (the backslashes) with a second backslash.
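For reference, a sketch of the corrected runCommand entries, with each backslash in the registry path doubled (the rest of the document is unchanged):
      runCommand:
        - "$ResultMsg = (Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\*).DisplayName -Contains 'Software Name Here'"
        - "Write-Output $ResultMsg"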


Force AWS account numbers that start with "00" to string

Does anybody know a workaround for converting account numbers that start with "00" to strings? I am using Mappings in a CloudFormation template to assign values based on the account number. I put the account number in quotes to convert it to a string, and this works well as long as it does not start with a zero; when it does, I get the following error:
[/Mappings/EnvMap] map keys must be strings; received numeric [1.50xxx028E9]
Mappings:
  EnvMap:
    "8727xxxx0":
      env: "dev"
    "707xxxx78":
      env: "test"
    "00150xxx280":
      env: "prod"
Resources:
  rS3Stack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL: "https://s3.amazonaws.com/some_bucket/nested_cfn/s3.yaml"
      Parameters:
        pEnvironment: !FindInMap
          - EnvMap
          - !Ref 'AWS::AccountId'
          - env
Your problem is caused by a bug in PyYAML, which results from some ambiguity in the YAML 1.1 specification. According to YAML 1.1, a decimal integer must not start with 0, and numbers starting with 0 are considered octal numbers. So when PyYAML parses the account id, it considers it not to be an integer, because it starts with 0, but also not an octal number, because it contains an 8. As it's neither an integer nor an octal number, PyYAML considers it a string, which it deems safe to dump without surrounding quotes.
A minimal example to reproduce this looks like this:
>>> import sys
>>> import yaml
>>> yaml.dump(["1", "8", "01", "08"], sys.stdout)
- '1'
- '8'
- '01'
- 08
Now you might wonder why a PyYAML bug matters here, when you just want to deploy a CloudFormation stack:
Depending on how you deploy a CloudFormation stack the template might get transformed locally, before it gets deployed. That happens for example when using aws cloudformation package, sam package or sam build to replace local code locations with paths in S3. As reading and writing the template during those transformations is done using PyYAML, it triggers the PyYAML bug mentioned above. There are bug reports for the AWS CLI and the AWS SAM CLI regarding this problem.
As the account id causing the problem is used as a key in your case, your options to work around the problem are limited, since you can't utilize CloudFormation's intrinsic functions. However, there are still possible workarounds:
If you're using the AWS CLI, you can switch to using the AWS CLI v2, which doesn't suffer from this bug as it uses ruamel instead of PyYAML. ruamel handles numbers as one would expect, as it implements YAML 1.2, which doesn't contain the ambiguity in its specification.
Whether you're using the AWS SAM CLI or the AWS CLI, you can convert the transformed template from YAML to JSON and back to YAML, which "fixes" the bug as well, as it results in the problematic numbers being quoted again. There is a tool from AWS called cfn-flip to do so. You'd have to run this double flip between packaging and deployment. For the AWS SAM CLI, that would for example look like:
sam build
cfn-flip .aws-sam/build/template.yaml | cfn-flip > .aws-sam/build/template.tmp.yaml
mv .aws-sam/build/template.tmp.yaml .aws-sam/build/template.yaml
sam deploy
With this said, I personally would suggest a completely different workaround, and that's to remove the mapping from the template. Hardcoding account ids and environments makes a template less portable, as it limits the accounts/environments the template can be used for. I'd instead provide the environment as a parameter to the CloudFormation template, so it doesn't need to be aware of account ids at all, as sketched below.
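A minimal sketch of that approach, reusing the nested stack from the question; the environment is then supplied at deploy time (e.g. via --parameter-overrides with aws cloudformation deploy) instead of being derived from the account id:
Parameters:
  pEnvironment:
    Type: String
    AllowedValues: ["dev", "test", "prod"]
Resources:
  rS3Stack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL: "https://s3.amazonaws.com/some_bucket/nested_cfn/s3.yaml"
      Parameters:
        pEnvironment: !Ref pEnvironment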

Docker AWS ECR error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"

Had an issue with not being able to push or pull from an AWS ECR registry with the following cryptic error:
error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"
Several hours of googling indicated it was a protocol issue. It turns out the image name:
xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/snowshu__test
was the issue: AWS ECR errors when the image name contains double underscores. This contradicts the ECR naming documentation: you cannot have two underscores next to each other in a repository name.
As per the Docker Registry API:
A component of a repository name must be at least one lowercase, alpha-numeric characters, optionally separated by periods, dashes or underscores. More strictly, it must match the regular expression [a-z0-9]+(?:[._-][a-z0-9]+)*.
Renaming the image to
xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/snowshu_test
solved the issue.
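A quick way to check a candidate repository name against that rule is to test it with the quoted regular expression; a small illustrative sketch in Python:
import re

# One repository-name component, per the Docker Registry API rule quoted above
component = re.compile(r"^[a-z0-9]+(?:[._-][a-z0-9]+)*$")

for name in ["snowshu__test", "snowshu_test"]:
    # Consecutive separators such as "__" fail the optional separator groups
    print(name, "valid" if component.match(name) else "invalid")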

How to remove a service account key on GCP using Ansible Playbook?

I am using an Ansible playbook to run certain modules that create service accounts and their respective keys. The code used to generate these is taken from the Ansible documentation:
- name: create a service account key
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
Now I am trying to remove the service account key, so I changed the state value from present to absent, but that doesn't seem to do much. Am I missing something, or is there anything else I could try?
I'm not sure if this is possible, since I couldn't find it in the Ansible documentation, but in the instance deletion examples I see that, after the absent state, they use a tag for the deletion. It could be a way to do it for the service account, e.g.:
state: absent
tags:
  - delete
Another way that could be useful is to make the request directly against the REST API, e.g.:
DELETE https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]
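For example, with curl and an access token from the gcloud CLI (the bracketed placeholders as above):
curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]"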
I can confirm that it works when changing state from present to absent in version 1.0.2 of the google.cloud collection.
I believe you expect the file in path: "~/test_account.json" to be deleted, but in fact the key is deleted on the service account in GCP. You will have to delete the file yourself after the task has completed successfully.
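Putting those answers together, a sketch of the full removal, assuming the same variables as in the question (the local cleanup uses Ansible's file module):
- name: delete the service account key in GCP
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent

- name: remove the local key file
  file:
    path: "~/test_account.json"
    state: absent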

Digitalocean spaces with Elixir

I'm trying to find an AWS client for Elixir that can be used with DigitalOcean Spaces.
I tried aws-elixir (since it allows a different endpoint), but I can't find a way to do S3 operations.
I ask:
How do I handle S3 buckets from aws-elixir?
If aws-elixir doesn't work, what's the best solution for my situation?
aws-elixir does not support S3, unfortunately, but ExAws does. In order to use ExAws, you first need to add these dependencies to your mix.exs file:
defp deps() do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6"}
  ]
end
Note that both ex_aws and ex_aws_s3 need to be added to your dependencies. hackney is an HTTP client, poison is for JSON parsing, and sweet_xml is for XML parsing.
Now that you've added the dependencies, you next need to configure ExAws to connect to DigitalOcean Spaces instead.
Put this into your config.exs file:
config :ex_aws, :s3,
  %{
    access_key_id: "access key",
    secret_access_key: "secret key",
    scheme: "https://",
    host: %{"sfo2" => "your-space-name.sfo2.digitaloceanspaces.com"},
    region: "sfo2"
  }
"access key" and "secret key" need to be replaced with the actual keys you get from DigitalOcean.
Please make sure to replace "sfo2" with the actual Spaces region you're using. And of course, put your actual space name instead of your-space-name.
Don't forget to run mix deps.get, and you're all set.
You can start an iex session and verify that everything is working by running iex -S mix and then typing:
ExAws.S3.list_objects("bucket") |> ExAws.request!
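If that returns a listing, uploads work the same way; for example (the bucket name and object key here are just placeholders):
ExAws.S3.put_object("bucket", "some/key.txt", "hello world") |> ExAws.request!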

aws ec2 request-spot-instances CLI issues

Trying to start a couple of spot instances within a simple script; the syntax supplied in the AWS documentation and the aws ec2 request-spot-instances help output is listed in either Java or JSON syntax. How does one enter the parameters under the JSON syntax from inside a shell script?
aws --version
aws-cli/1.2.6 Python/2.6.5 Linux/2.6.21.7-2.fc8xen
aws ec2 request-spot-instances help
-- at the start of "launch specification" it lists JSON syntax
--launch-specification (structure)
    Specifies additional launch instance information.
JSON Syntax:
{
  "ImageId": "string",
  "KeyName": "string",
}, ....
  "EbsOptimized": true|false,
  "SecurityGroupIds": ["string", ...],
  "SecurityGroups": ["string", ...]
}
I have tried every possible combination of the following, adding and moving brackets, quotes, changing options, etc., all to no avail. What would be the correct formatting of the variable $launch below to make this work? Other command variations -- ec2-request-spot-instances -- are not working in my environment, nor does it work if I try to substitute --spot-price with -p.
#!/bin/bash
launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
echo $launch
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type c1.small --launch-specification $launch
This provides result:
Unknown options: SecurityGroups:launch-wizard-6
Substituting the security group number has the same result.
aws ec2 describe-instances works perfectly, as does aws ec2 start-instances, so the environment and account information are properly set up, but I need to utilize spot pricing.
In fact, nothing is working as listed in this user documentation: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RequestSpotInstances.html
Thank you,
I know this is an old question, but in case somebody runs into it: I had the same issue recently with the CLI. It was very hard to get all the parameters to work correctly for request-spot-instances.
#!/bin/bash
AWS_DEFAULT_OUTPUT="text"
UserData=$(base64 < userdata-current)
region="us-west-2"
price="0.03"
zone="us-west-2c"
aws ec2 request-spot-instances --region $region --spot-price $price --launch-specification "{ \"KeyName\": \"YourKey\", \"ImageId\": \"ami-3d50120d\" , \"UserData\": \"$UserData\", \"InstanceType\": \"r3.large\" , \"Placement\": {\"AvailabilityZone\": \"$zone\"}, \"IamInstanceProfile\": {\"Arn\": \"arn:aws:iam::YourAccount:YourProfile\"}, \"SecurityGroupIds\": [\"YourSecurityGroupId\"],\"SubnetId\": \"YourSubnectId\" }"
Basically, what I had to do is put my user data in an external file, load it into the UserData variable, and then pass that on the command line. Trying to get everything on the command line, or using an external file for ec2-request-spot-instances, just kept failing. Note that other commands worked just fine, so this is specific to ec2-request-spot-instances.
I detailed more about what I ended up doing here.
You have to use a list in this case:
"SecurityGroups": ["string", ...]
so
"SecurityGroups":"launch-wizard-6"
becomes
"SecurityGroups":["launch-wizard-6"]
Anyway, I'm dealing with the CLI right now, and I found it more useful to use an external JSON file.
Here is an example using Python:
import subprocess

myJson = "file:///Users/xxx/Documents/Python/xxxxx/spotInstanceInformation.json"
x = subprocess.check_output(["/usr/local/bin/aws ec2 request-spot-instances --spot-price 0.2 --launch-specification " + myJson], shell=True)
print(x)
And the output is:
{
    "SpotInstanceRequests": [
        {
            "Status": {
                "UpdateTime": "2013-12-09T02:41:41.000Z",
                "Code": "pending-evaluation",
                "Message": "Your Spot request has been submitted for review, and is pending evaluation."
etc etc ....
Doc is here : http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html
FYI - I'm appending file:/// because I'm using a Mac. If you are launching your bash script on Linux, you could just use myJson="/path/to/file"
The first problem here is quoting and formatting:
$ launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
This isn't going to generate valid JSON, because the block you copied from the help file includes a spurious closing brace from a nested object that you didn't include, the final closing brace is missing, and the unescaped double quotes are disappearing.
But we're not really getting to the point where the JSON is actually being validated, because with that space after the closing brace, the CLI assumes that SecurityGroups and launch-wizard-6 are more command line options following the argument to --launch-specification:
$ echo $launch
{ImageId:ami-a999999,InstanceType:c1.medium} SecurityGroups:launch-wizard-6
That's probably not what you expected... so we'll fix the quoting so that it's passed as one long argument, once the JSON is valid:
From the perspective of just generating valid json structures (not necessarily content), the data you are most likely trying to send would actually look like this, based on the docs:
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
Check that it's structurally valid JSON with any JSON validator.
Fixing the bracing, commas, and bracketing, the CLI stops throwing that error, with this formatting:
$ launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
$ echo $launch
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
That isn't to say the API might not subsequently reject the request due to something else incorrect or missing, but you were never actually getting to the point of sending anything to the API; this was failing local validation in the command line tools.
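Putting it together, the corrected invocation from the question would look something like this; note that --type takes the Spot request type (one-time or persistent), not an instance type, and the AMI and security group are the hypothetical values from the question:
#!/bin/bash
# Quote the variable so the shell passes the JSON as a single argument
launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type one-time --launch-specification "$launch"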