ansible environment variables error when connecting to aws - amazon-web-services

I am trying to execute a playbook for stopping EC2 instances, along with other playbooks.
When I execute a playbook I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "ec2_url": null, "key_material": null, "name": "ansible-sae", "profile": null, "region": "us-east-1", "security_token": null, "state": "present", "validate_certs": true, "wait": false, "wait_timeout": "300"}, "module_name": "ec2_key"}, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
I have added the environment variables in my .bashrc file, but I still get the error. When I include the AWS access key and secret key directly in the playbook, however, it executes without error. The credentials I provided have PowerUser access, and I can see the env variables when I open .bashrc, meaning I saved them correctly. I am not able to understand why I get this error.
You can see the AWS access key and secret access key variables:
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXXXXX'
And the playbook is something like this:
- hosts: local
  connection: local
  remote_user: ansible_user
  become: yes
  gather_facts: no
  tasks:
    - name: Create a new key pair
      ec2_key:
        name: ansible-sae
        region: us-east-1
        state: present
When I put the same creds in the playbook, it works.
Ansible version 2.1.0.0, RHEL 7.2 (Maipo)

I was going through GitHub and found it was a bug; it seems many people were having this problem:
https://github.com/ansible/ansible/issues/10638
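A common workaround until that bug is fixed (a sketch, assuming `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are exported in the environment of the user running `ansible-playbook`) is to pass the exported variables to the module explicitly via an `env` lookup. This sidesteps both the bug and the fact that `become: yes` can reset the environment for the escalated user:

```yaml
- hosts: local
  connection: local
  gather_facts: no
  tasks:
    - name: Create a new key pair
      ec2_key:
        name: ansible-sae
        region: us-east-1
        state: present
        # read the keys from the controller's environment instead of
        # relying on the module process inheriting .bashrc exports
        aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
        aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
```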

Related

Cannot use AWS SSO credentials with CDK

Since PR https://github.com/aws/aws-cdk/pull/19454 and release v2.18.0, CDK is supposed to support SSO credentials via AWS CLI v2 profiles.
However, no matter what I do, I simply cannot get this to work.
I have created a request for updated documentation in the AWS CDK issues section, since no official documentation explains how this is supposed to work in practice, and the official documentation still says it is not supported and to use yawsso: https://github.com/aws/aws-cdk/issues/21314
Going through threads from the past four years, I have attempted the following settings with zero success.
My .aws/config file (sensitive values redacted):
[profile DEV-NN-HSMX]
sso_start_url = https://my-company-url.awsapps.com/start#/
sso_region = eu-central-1
sso_account_name = MY-ACCOUNT
sso_account_id = MY-ACCOUNT-ID
sso_role_name = AdministratorAccess
region = eu-central-1
Running aws sso login --profile "DEV-NN-HSMX" redirects me as expected and I can authenticate with my SSO provider.
Running aws sts get-caller-identity --profile "DEV-NN-HSMX" works as expected and confirms my SSO identity.
Running aws s3 ls --profile "DEV-NN-HSMX" works as expected and shows that the credentials have access.
When attempting to run any CDK commands, however, I simply cannot make it work.
AWS CLI version: 2.7.16
AWS CDK version: 2.33.0
I have attempted a combination of all the following, either separately, mixed in all combinations and all at once.
cdk deploy --profile "DEV-NN-HSMX"
Exporting both the $AWS_PROFILE and/or the $CDK_DEFAULT_PROFILE environment variables:
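The exports themselves (a sketch; same profile name and region as in my config above) were simply:

```shell
# point both the AWS SDK and the CDK CLI at the SSO profile
export AWS_PROFILE="DEV-NN-HSMX"
export CDK_DEFAULT_PROFILE="DEV-NN-HSMX"
export CDK_DEFAULT_REGION="eu-central-1"
```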
cdk doctor
ℹ️ CDK Version: 2.33.0 (build 859272d)
ℹ️ AWS environment variables:
- AWS_CA_BUNDLE = /home/vscode/certs/cacert.pem
- AWS_PROFILE = DEV-NN-HSMX
- AWS_REGION = eu-central-1
- AWS_STS_REGIONAL_ENDPOINTS = regional
- AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ CDK environment variables:
- CDK_DEFAULT_PROFILE = DEV-NN-HSMX
- CDK_DEFAULT_REGION = eu-central-1
I have tried with a deleted .aws/credentials file as well as one that is just empty.
I have deleted everything in .aws/sso/cache and in .aws/cli/cache to make sure no expired credential information remained, and then re-authenticated with aws sso login --profile "DEV-NN-HSMX".
If I use yawsso --profiles DEV-NN-HSMX to export temporary credentials into .aws/credentials for my profile, it works fine.
I have been able to bootstrap and deploy without issues using the credential conversion, proving that from a connection, access-rights, and bootstrap standpoint everything works as expected.
When using any of the SSO methods as explained above without exporting credentials, I always get the following error message.
cdk deploy --profile "DEV-NN-HSMX"
✨ Synthesis time: 4.18s
Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
Running the command with full verbosity gives this output:
cdk deploy --trace --verbose --profile "DEV-NN-HSMX"
CDK toolkit version: 2.33.0 (build 859272d)
Command line arguments: {
_: [ 'deploy' ],
trace: true,
verbose: 1,
v: 1,
profile: 'DEV-NN-HSMX',
defaultProfile: 'DEV-NN-HSMX',
defaultRegion: 'eu-central-1',
lookups: true,
'ignore-errors': false,
ignoreErrors: false,
json: false,
j: false,
debug: false,
ec2creds: undefined,
i: undefined,
'version-reporting': undefined,
versionReporting: undefined,
'path-metadata': true,
pathMetadata: true,
'asset-metadata': true,
assetMetadata: true,
'role-arn': undefined,
r: undefined,
roleArn: undefined,
staging: true,
'no-color': false,
noColor: false,
ci: false,
all: false,
'build-exclude': [],
E: [],
buildExclude: [],
execute: true,
force: false,
f: false,
parameters: [ {} ],
'previous-parameters': true,
previousParameters: true,
logs: true,
'$0': '/home/vscode/.local/state/fnm_multishells/216_1658735050827/bin/cdk'
}
cdk.json: {
"app": "npx ts-node --prefer-ts-exts bin/cdk-demo.ts",
"watch": {
"include": [
"**"
],
"exclude": [
"README.md",
"cdk*.json",
"**/*.d.ts",
"**/*.js",
"tsconfig.json",
"package*.json",
"yarn.lock",
"node_modules",
"test"
]
},
"context": {
"#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
"#aws-cdk/core:stackRelativeExports": true,
"#aws-cdk/aws-rds:lowercaseDbIdentifier": true,
"#aws-cdk/aws-lambda:recognizeVersionProps": true,
"#aws-cdk/aws-lambda:recognizeLayerVersion": true,
"#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true,
"#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
"#aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
"#aws-cdk/core:checkSecretUsage": true,
"#aws-cdk/aws-iam:minimizePolicies": true,
"#aws-cdk/core:validateSnapshotRemovalPolicy": true,
"#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
"#aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
"#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
"#aws-cdk/core:target-partitions": [
"aws",
"aws-cn"
]
}
}
merged settings: {
versionReporting: true,
pathMetadata: true,
output: 'cdk.out',
app: 'npx ts-node --prefer-ts-exts bin/cdk-demo.ts',
watch: {
include: [ '**' ],
exclude: [
'README.md',
'cdk*.json',
'**/*.d.ts',
'**/*.js',
'tsconfig.json',
'package*.json',
'yarn.lock',
'node_modules',
'test'
]
},
context: {
'#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId': true,
'#aws-cdk/core:stackRelativeExports': true,
'#aws-cdk/aws-rds:lowercaseDbIdentifier': true,
'#aws-cdk/aws-lambda:recognizeVersionProps': true,
'#aws-cdk/aws-lambda:recognizeLayerVersion': true,
'#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021': true,
'#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver': true,
'#aws-cdk/aws-ec2:uniqueImdsv2TemplateName': true,
'#aws-cdk/core:checkSecretUsage': true,
'#aws-cdk/aws-iam:minimizePolicies': true,
'#aws-cdk/core:validateSnapshotRemovalPolicy': true,
'#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName': true,
'#aws-cdk/aws-s3:createDefaultLoggingPolicy': true,
'#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption': true,
'#aws-cdk/core:target-partitions': [ 'aws', 'aws-cn' ]
},
debug: false,
assetMetadata: true,
profile: 'DEV-NN-HSMX',
toolkitBucket: {},
staging: true,
bundlingStacks: [ '*' ],
lookups: true
}
Using CA bundle path: /home/vscode/certs/cacert.pem
Toolkit stack: CDKToolkit
Setting "CDK_DEFAULT_REGION" environment variable to eu-central-1
Resolving default credentials
Could not refresh notices: Error: unable to get local issuer certificate
Unable to determine the default AWS account: ProcessCredentialsProviderFailure: Profile DEV-NN-HSMX did not include credential process
at ProcessCredentials2.load (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/process_credentials.js:102:11)
at ProcessCredentials2.coalesceRefresh (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:205:12)
at ProcessCredentials2.refresh (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/process_credentials.js:163:10)
at ProcessCredentials2.get2 [as get] (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:122:12)
at resolveNext2 (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/credential_provider_chain.js:125:17)
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/credential_provider_chain.js:126:13
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:124:23
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:212:15
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 'ProcessCredentialsProviderFailure',
time: 2022-07-25T15:01:41.645Z
}
context: {
'#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId': true,
'#aws-cdk/core:stackRelativeExports': true,
'#aws-cdk/aws-rds:lowercaseDbIdentifier': true,
'#aws-cdk/aws-lambda:recognizeVersionProps': true,
'#aws-cdk/aws-lambda:recognizeLayerVersion': true,
'#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021': true,
'#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver': true,
'#aws-cdk/aws-ec2:uniqueImdsv2TemplateName': true,
'#aws-cdk/core:checkSecretUsage': true,
'#aws-cdk/aws-iam:minimizePolicies': true,
'#aws-cdk/core:validateSnapshotRemovalPolicy': true,
'#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName': true,
'#aws-cdk/aws-s3:createDefaultLoggingPolicy': true,
'#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption': true,
'#aws-cdk/core:target-partitions': [ 'aws', 'aws-cn' ],
'aws:cdk:enable-path-metadata': true,
'aws:cdk:enable-asset-metadata': true,
'aws:cdk:version-reporting': true,
'aws:cdk:bundling-stacks': [ '*' ]
}
outdir: cdk.out
env: {
CDK_DEFAULT_REGION: 'eu-central-1',
CDK_CONTEXT_JSON: '{"#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId":true,"#aws-cdk/core:stackRelativeExports":true,"#aws-cdk/aws-rds:lowercaseDbIdentifier":true,"#aws-cdk/aws-lambda:recognizeVersionProps":true,"#aws-cdk/aws-lambda:recognizeLayerVersion":true,"#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021":true,"#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver":true,"#aws-cdk/aws-ec2:uniqueImdsv2TemplateName":true,"#aws-cdk/core:checkSecretUsage":true,"#aws-cdk/aws-iam:minimizePolicies":true,"#aws-cdk/core:validateSnapshotRemovalPolicy":true,"#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName":true,"#aws-cdk/aws-s3:createDefaultLoggingPolicy":true,"#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption":true,"#aws-cdk/core:target-partitions":["aws","aws-cn"],"aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '20.0.0',
CDK_CLI_VERSION: '2.33.0'
}
✨ Synthesis time: 4.54s
Reading existing template for stack CdkDemoStack.
Reading cached notices from /home/vscode/.cdk/cache/notices.json
Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
Error: Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
at SdkProvider.resolveEnvironment (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-cdk/lib/api/aws-auth/sdk-provider.ts:238:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at CloudFormationDeployments.prepareSdkFor (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-cdk/lib/api/cloudformation-deployments.ts:432:33)
I do notice the ProcessCredentialsProviderFailure in the output, but it is not very informative about how to solve it.
Anyone have any ideas or input?
It seems that environment-agnostic stacks, where you do not put the environment information directly into the stack code, do not work with the new SSO integration.
Adding the environment information to the stack code makes it work:
const app = new cdk.App();
new CdkDemoStack(app, 'CdkDemoStack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});

Ansible and GCP Using facts GCP filestore module

EDIT: I can use gcloud but cannot see how to get the IP into a variable.
gcloud filestore instances describe nfsxxxd --project=dxxxxt-2xxx --zone=xxxx-xx-b --format='get(networks.ipAddresses)'
['1xx.x.x.1']
I am trying to create a filestore and mount it in an instance.
I am facing an issue when trying to get the IP address of this new filestore.
I am using the Ansible module, and I can see the output when using -v on the ansible command.
Ansible module filestore:
- name: get info on an instance
  gcp_filestore_instance_info:
    zone: xxxxx-xxxx-b
    project: dxxxxx-xxxxxx
    auth_kind: serviceaccount
    service_account_file: "/root/dxxxt-xxxxxxx.json"
Ansible output:
ok: [xxxxxx-xxxxxx] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "resources": [{"createTime": "2021-03-12T13:40:36.438598373Z", "fileShares": [{"capacityGb": "1024", "name": "nfxxxxx"}], "name": "projects/xxx-xxxxx/locations/xxxxxxx-b/instances/xxxxx-xxx", "networks": [{"ipAddresses": ["1xx.x.x.x"], "modes": ["MODE_IPV4"], "network": "admin", "reservedIpRange": "1xx.x.x.x/29"}], "state": "READY", "tier": "BASIC_HDD"}, {"createTime": "2021-03-10T11:13:00.111631131Z", "fileShares": [{"capacityGb": "1024", "name": "nfsnxxxxx", "nfsExportOptions": [{"accessMode": "READ_WRITE", "ipRanges": ["xxx.xx.xx.xxx"], "squashMode": "NO_ROOT_SQUASH"}]}], "name": "projects/dxxx-xxxxx/locations/xxxxx/instances/innxxxx", "networks": [{"ipAddresses": ["x.x.x.x."], ...
I have tried this but it doesn't work.
Ansible tasks:
- name: print fact filestore
  debug:
    msg: "{{ ansible_facts.resources.createTime }}"
fatal: [nxxxxxxx]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'resources'\n\nThe error appears to be in '/root/xxxxxxx/tasks/main.yml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: print fact filestore\n ^ here\n"}
Thanks
Going by the example output in your question, the info is returned in a resources key in your task result once you register it. I cannot test this myself, but I believe the following should meet your expectation.
Please note that resources is a list of dicts. In the example below I access the info from the first element of the list. If you need something else (e.g. a list of all createTime values) or want to loop over those objects, you can extend from this example.
- name: get info on an instance
  gcp_filestore_instance_info:
    zone: xxxxx-xxxx-b
    project: dxxxxx-xxxxxx
    auth_kind: serviceaccount
    service_account_file: "/root/dxxxt-xxxxxxx.json"
  register: instance_info

- name: show create time for first resource
  debug:
    msg: "{{ instance_info.resources.0.createTime }}"

- name: show first ip of first network of first resource
  debug:
    msg: "{{ instance_info.resources.0.networks.0.ipAddresses.0 }}"
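For instance, to extend this across all returned resources (a sketch building on the registered `instance_info` above), Jinja2's `map` filter or a loop works:

```yaml
- name: show create time of every resource
  debug:
    msg: "{{ instance_info.resources | map(attribute='createTime') | list }}"

- name: show the first IP of every resource
  debug:
    msg: "{{ item.networks.0.ipAddresses.0 }}"
  loop: "{{ instance_info.resources }}"
```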

Creation GCP ressource and get IP adresse

I must create a new Nexus server on GCP. I have decided to use an NFS mount point for data storage. Everything must be done with Ansible (the instance is already created with Terraform).
I must get the dynamic IP assigned by GCP and create the mount point.
It works fine with the gcloud command, but how do I get only the IP info?
Code:
- name: get info
  shell: gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'
  register: ip

- name: Print all available facts
  ansible.builtin.debug:
    msg: "{{ ip }}"
result:
ok: [nexus-ppd.preprod.d-aim.com] => {
"changed": false,
"msg": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": true,
"cmd": "gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'",
"delta": "0:00:00.763235",
"end": "2021-03-14 00:33:43.727857",
"failed": false,
"rc": 0,
"start": "2021-03-14 00:33:42.964622",
"stderr": "",
"stderr_lines": [],
"stdout": "['1x.x.x.1xx']",
"stdout_lines": [
"['1x.x.x.1xx']"
]
}
}
Thanks
Just use the proper format string, e.g. to get the first IP:
--format='get(networks.ipAddresses[0])'
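Used from Ansible (a sketch with the same placeholder project/zone as in the question), the registered stdout then contains only the address, ready for the mount task:

```yaml
- name: get filestore IP only
  shell: gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses[0])'
  register: nfs_ip

- name: use the plain address
  debug:
    msg: "NFS server address is {{ nfs_ip.stdout }}"
```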
Found the solution, just add this:
- name:
  debug:
    msg: "{{ ip.stdout_lines }}"
I'm feeling so stupid :( I must stop working past 2 AM :)
Thanks

Ansible failed to run shell module on sbin folder

I ran an Ansible playbook on a specific host.
When I execute, for example, the iptables -L command from Ansible, I get this error:
changed: [host] => {"changed": true, "cmd": "iptables -L", "delta": "0:00:00.018402", "end": "2020-04-26 09:33:11.274857", "failed_when_result": false, "msg": "non-zero return code", "rc": 127, "start": "2020-04-26 09:33:11.256455", "stderr": "/bin/sh: iptables: command not found", "stderr_lines": ["/bin/sh: iptables: command not found"], "stdout": "", "stdout_lines": []}
Example playbook:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: ls
      shell: tuned -v
      args:
        executable: /usr/sbin
    - name: iptables flush filter
      iptables:
        chain: "{{ item }}"
        flush: yes
      with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
    - name: Get iptables rules | No resilience comment
      command: iptables -L
      become: yes
      args:
        executable: /sbin
Inventory file:
[hosts]
host
[all:vars]
ansible_user=ansible_user
ansible_become_user=root
ansible_ssh_pass=pass
ansible_become=yes
But iptables is installed on the machine.
I checked more commands and found that none of the commands in the /sbin folder are found.
What is the reason?
Thanks for helping.
got that all the commands in /sbin folder not found. What the reason
The usual reason is the $PATH variable, which does not include the /sbin location. The simplest solution is to use the full path to the binary you want to run, so instead of attempting to invoke iptables, use /sbin/iptables.
Alternatively, and arguably a better solution since it requires neither hardcoding paths nor editing anything else, you can set your own $PATH for the whole playbook, as documented in the Ansible FAQ:
environment:
  PATH: "{{ ansible_env.PATH }}:/thingy/bin"
  OTHER_ENV_VAR: its_new_value
Note that the above example appends the /thingy/bin path to the existing value of $PATH. You may want to prepend it instead, or replace the existing PATH completely if needed. Also note that ansible_env is normally populated by fact gathering (thus you must not disable it) and the values of those variables depend on the user that performed the gathering. If you change remote_user or become_user you might end up using wrong/different values for those variables.
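Applied to the iptables case from the question, a minimal sketch (fact gathering left enabled so that ansible_env.PATH is defined):

```yaml
---
- hosts: all
  become: yes
  # fact gathering must stay enabled, otherwise ansible_env is undefined
  environment:
    PATH: "{{ ansible_env.PATH }}:/sbin:/usr/sbin"
  tasks:
    - name: Get iptables rules
      command: iptables -L
```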

Ansible rds_instance does not wait until modifying multi az has been finished

Hi together,
I need to shut down RDS instances. However, when an RDS instance has a Multi-AZ deployment it is not possible to stop it. Hence, it is necessary to modify the deployment to a non-Multi-AZ deployment first. Then, I thought, I should be able to stop the instance.
When finally starting the instance again, after it is available it should be modified back to a Multi-AZ deployment.
However, I am struggling with this Ansible playbook, which is executed within a Jenkins pipeline, since it does not "wait" until the modification has been successfully conducted and the RDS state is "available".
Here are the files
### vars/rds.yml
my_rds_state:
  running:
    name: started
    description: Starting
    multiZone: true
  stopping:
    name: stopped
    description: Stopping
    multiZone: false
### manage_rds.yml
---
- hosts: localhost
  vars:
    rdsState: "{{ instanceState }}"
    rdsIdentifier: "{{ identifier }}"
  tasks:
    - name: Include vars
      include_vars: rds.yml
    - import_tasks: tasks/task_modify_rds.yml
      when: rdsState == "stopping"
    - debug:
        var: my_rds_state
    - import_tasks: tasks/task_state_rds.yml
    - import_tasks: tasks/task_modify_rds.yml
      when: rdsState == "running"
### tasks/task_modify_rds.yml
- name: Modify RDS deployment
rds_instance:
db_instance_identifier: "{{rdsIdentifier}}"
apply_immediately: yes
multi_az: "{{my_rds_state[rdsState].multiZone | bool}}"
state: "{{my_rds_state[rdsState].name}}"
The my_rds_state value is:
my_rds_state:
ok: [localhost] => {
"my_rds_state": {
"running": {
"description": "Starting",
"multiZone": false,
"name": "started"
},
"stopping": {
"description": "Stopping",
"multiZone": true,
"name": "stopped"
}
}
}
Furthermore, console output looks like:
TASK [Modify RDS deployment] **********************************************
changed: [localhost]
TASK [Stopping RDS instances] **************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidDBInstanceStateFault: An error occurred (InvalidDBInstanceState) when calling the StopDBInstance operation: Cannot stop or start a SQLServer MultiAz database instance
fatal: [localhost]: FAILED! => {"boto3_version": "1.11.8", "botocore_version": "1.14.8", "changed": false, "error": {"code": "InvalidDBInstanceState", "message": "Cannot stop or start a SQLServer MultiAz database instance", "type": "Sender"}, "msg": "Unable to stop DB instance: An error occurred (InvalidDBInstanceState) when calling the StopDBInstance operation: Cannot stop or start a SQLServer MultiAz database instance", "response_metadata": {"http_headers": {"connection": "close", "content-length": "311", "content-type": "text/xml", "date": "Tue, 25 Feb 2020 00:01:26 GMT", "x-amzn-requestid": "215571e3-12b6-4b1f-b640-587f3e1686fe"}, "http_status_code": 400, "request_id": "215571e3-12b6-4b1f-b640-587f3e1686fe", "retry_attempts": 0}}
Any ideas what the problem might be and why Ansible does not wait?
I have found the solution myself.
Since triggering an action that causes the state to change to "modifying" is an asynchronous operation, I had to use a kind of waiter:
- name: Wait until the DB instance status changes to 'modifying'
  rds_instance_info:
    db_instance_identifier: "{{ rdsIdentifier }}"
  register: database_info
  until: database_info.instances[0].db_instance_status == "modifying"
  retries: 18
  delay: 10
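The same pattern can cover the restart half of the flow (a sketch; the retry count is an assumption, not from the original): after re-enabling Multi-AZ, wait until the instance reports "available" again before continuing:

```yaml
- name: Wait until the DB instance is available again
  rds_instance_info:
    db_instance_identifier: "{{ rdsIdentifier }}"
  register: database_info
  until: database_info.instances[0].db_instance_status == "available"
  retries: 30
  delay: 10
```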