Creating a GCP resource and getting its IP address - google-cloud-platform

I must create a new Nexus server on GCP. I have decided to use an NFS mount point for data storage. Everything must be done with Ansible (the instance is already created with Terraform).
I must get the dynamic IP assigned by GCP and create the mount point.
It works fine with the gcloud command, but how do I get only the IP info?
Code:
- name: get info
  shell: gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'
  register: ip

- name: Print all available facts
  ansible.builtin.debug:
    msg: "{{ ip }}"
result:
ok: [nexus-ppd.preprod.d-aim.com] => {
    "changed": false,
    "msg": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": true,
        "cmd": "gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'",
        "delta": "0:00:00.763235",
        "end": "2021-03-14 00:33:43.727857",
        "failed": false,
        "rc": 0,
        "start": "2021-03-14 00:33:42.964622",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "['1x.x.x.1xx']",
        "stdout_lines": [
            "['1x.x.x.1xx']"
        ]
    }
}
Thanks

Just use the proper format string, e.g. to get the first IP:
--format='get(networks.ipAddresses[0])'
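For example (project and zone masked as in the question), the command should then print the bare IP, without the brackets and quotes. This is a sketch inferred from the output format shown above, not verified against a live instance:

gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses[0])'
1x.x.x.1xx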

Found the solution; just add this:
- name:
  debug:
    msg: "{{ ip.stdout_lines }}"
I'm feeling so stupid :(, I must stop working after 2 AM :)
Thx


Ansible and GCP: Using facts from the GCP filestore module

EDIT: I can use gcloud, but I cannot see how to get the IP into a var.
gcloud filestore instances describe nfsxxxd --project=dxxxxt-2xxx --zone=xxxx-xx-b --format='get(networks.ipAddresses)'
['1xx.x.x.1']
I'm trying to create a Filestore instance and mount it in an instance.
I'm facing an issue when trying to get the IP address of this new Filestore.
I'm using the Ansible module, and I can see the output when using -v on the ansible command.
Ansible module filestore:
- name: get info on an instance
  gcp_filestore_instance_info:
    zone: xxxxx-xxxx-b
    project: dxxxxx-xxxxxx
    auth_kind: serviceaccount
    service_account_file: "/root/dxxxt-xxxxxxx.json"
Ansible output:
ok: [xxxxxx-xxxxxx] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "resources": [{"createTime": "2021-03-12T13:40:36.438598373Z", "fileShares": [{"capacityGb": "1024", "name": "nfxxxxx"}], "name": "projects/xxx-xxxxx/locations/xxxxxxx-b/instances/xxxxx-xxx", "networks": [{"ipAddresses": ["1xx.x.x.x"], "modes": ["MODE_IPV4"], "network": "admin", "reservedIpRange": "1xx.x.x.x/29"}], "state": "READY", "tier": "BASIC_HDD"}, {"createTime": "2021-03-10T11:13:00.111631131Z", "fileShares": [{"capacityGb": "1024", "name": "nfsnxxxxx", "nfsExportOptions": [{"accessMode": "READ_WRITE", "ipRanges": ["xxx.xx.xx.xxx"], "squashMode": "NO_ROOT_SQUASH"}]}], "name": "projects/dxxx-xxxxx/locations/xxxxx/instances/innxxxx", "networks": [{"ipAddresses": ["x.x.x.x."], ...
I have tried this but it doesn't work.
Ansible tasks:
- name: print fact filestore
  debug:
    msg: "{{ ansible_facts.resources.createTime }}"
fatal: [nxxxxxxx]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'resources'\n\nThe error appears to be in '/root/xxxxxxx/tasks/main.yml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: print fact filestore\n ^ here\n"}
Thanks
If I'm to believe the example output in your question, the info is returned in a resources key in your task result. I cannot test myself, but I believe the following should meet your expectation.
Please note that resources is a list of dicts. In my example below I access the info from the first element of the list. If you need something else (e.g. a list of all createTime values) or want to loop over those objects, you can extend from this example, as sketched after the code.
- name: get info on an instance
  gcp_filestore_instance_info:
    zone: xxxxx-xxxx-b
    project: dxxxxx-xxxxxx
    auth_kind: serviceaccount
    service_account_file: "/root/dxxxt-xxxxxxx.json"
  register: instance_info

- name: show create time for first resource
  debug:
    msg: "{{ instance_info.resources.0.createTime }}"

- name: show first ip of first network of first resource
  debug:
    msg: "{{ instance_info.resources.0.networks.0.ipAddresses.0 }}"

ansible regex_replace in command

I'm using Ansible to deploy a Filestore instance on GCP. I need to get the IP from the instance and use it to create a mount point.
gcloud works fine, but it returns the IP wrapped in brackets and single quotes.
Can someone help me remove these characters please? My regex doesn't work and I'm a newbie with regex.
The error in the mount task (cannot resolve) comes from the '' around the IP in ip.stdout.
ansible code:
- name: get info
  shell: gcloud filestore instances describe "{{ nfs_id }}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)'
  register: ip

- name: master_setup.yml --> DEBUG REGEX
  debug:
    var: "{{ 'ip.stdout' | regex_replace('([^\\.]*)\\.(.+)$', '\\1') }}"

- name: print mount point test
  debug:
    msg: "{{ ip.stdout }}:/{{ nfs_name }}"

- name: Mount an NFS volume
  mount:
    fstype: nfs
    state: mounted
    opts: rw,sync,hard,intr
    src: "{{ ip.stdout }}:/{{ nfs_name }}"
    path: /mnt/nexus-storage
result of ansible playbook execution:
TASK [install_nexus : master_setup.yml --> DEBUG REGEX] ********************************************************
ok: [nexus-xxxx.xxx.xxxxx] => {
    "ip": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": true,
        "cmd": "gcloud filestore instances describe "{{nfs_id}}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)'",
        "delta": "0:00:01.013823",
        "end": "2021-03-14 21:23:32.398266",
        "failed": false,
        "rc": 0,
        "start": "2021-03-14 21:23:31.384443",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "['1xx.xxx.xx.2']",
        "stdout_lines": [
            "['1xx.xxx.xx.2']"
        ]
    }
}
TASK [install_nexus : print mount point test] ******************************************************************
ok: [nexus-xxxx.xxx.xxxxx] => {
    "msg": "['1xx.xxx.xx.2']:/nfsnexusnew"
}
TASK [install_nexus : Mount an NFS volume] *********************************************************************
[WARNING]: sftp transfer mechanism failed on [nexus-ppd.preprod.d-aim.com]. Use ANSIBLE_DEBUG=1 to see detailed
information
fatal: [nexus-xxxx.xxx.xxxxx]: FAILED! => {"changed": false, "msg": "Error mounting /mnt/nexus-storage: mount.nfs: Failed to resolve server '1xx.xxx.xx.2': Name or service not known\n"}
Thx
Resolved it by doing this; it's not very pretty, but it works. If someone finds a better solution, please forward it to me.
I used two regexes because I don't know how to remove the single quotes and brackets in one line:
- name: get info
  shell: gcloud filestore instances describe "{{ nfs_id }}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)' > /tmp/nfs-ip.txt

- name: sed regex to delete []
  shell: sed -i 's/[][]//g' /tmp/nfs-ip.txt

- name: sed regex to delete ''
  shell: sed -i 's|["'\'']||g' /tmp/nfs-ip.txt

- name: register result in var ip
  shell: cat /tmp/nfs-ip.txt
  register: ip

- name: Mount an NFS volume
  mount:
    fstype: nfs
    state: mounted
    opts: rw,sync,hard,intr
    src: "{{ ip.stdout }}:/{{ nfs_name }}"
    path: /mnt/nexus-storage
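As an aside, the two sed tasks could likely be collapsed into one by putting all three characters in a single character class; an untested sketch:

- name: sed regex to delete [] and ''
  shell: sed -i "s/[][']//g" /tmp/nfs-ip.txt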
Q: "Cannot resolve ip.stdout"
A: The value stored in ip.stdout is a string
"ip": {
...
"stdout": "['1xx.xxx.xx.2']",
...
}
Use the filters from_yaml and first to get the first item of the list, e.g.
src: "{{ ip.stdout|from_yaml|first }}:/{{ nfs_name }}"

Ansible rds_instance does not wait until modifying multi az has been finished

Hi all,
I need to shut down RDS instances. However, when an RDS instance has a Multi-AZ deployment it is not possible to stop it. Hence, it is necessary to modify the deployment to a non-Multi-AZ deployment first. Then, I thought, I should be able to stop the instance.
When finally starting the instance again, it should be modified back to a Multi-AZ deployment once it is available.
However, I'm struggling with this Ansible playbook, which is executed within a Jenkins pipeline, since it does not wait until the modification has been successfully completed and the RDS state is "available".
Here are the files:
### vars/rds.yml
my_rds_state:
  running:
    name: started
    description: Starting
    multiZone: true
  stopping:
    name: stopped
    description: Stopping
    multiZone: false
### manage_rds.yml
---
- hosts: localhost
  vars:
    rdsState: "{{ instanceState }}"
    rdsIdentifier: "{{ identifier }}"
  tasks:
    - name: Include vars
      include_vars: rds.yml

    - import_tasks: tasks/task_modify_rds.yml
      when: rdsState == "stopping"

    - debug:
        var: my_rds_state

    - import_tasks: tasks/task_state_rds.yml

    - import_tasks: tasks/task_modify_rds.yml
      when: rdsState == "running"
### tasks/task_modify_rds.yml
- name: Modify RDS deployment
  rds_instance:
    db_instance_identifier: "{{ rdsIdentifier }}"
    apply_immediately: yes
    multi_az: "{{ my_rds_state[rdsState].multiZone | bool }}"
    state: "{{ my_rds_state[rdsState].name }}"
The my_rds_state value is:
ok: [localhost] => {
    "my_rds_state": {
        "running": {
            "description": "Starting",
            "multiZone": false,
            "name": "started"
        },
        "stopping": {
            "description": "Stopping",
            "multiZone": true,
            "name": "stopped"
        }
    }
}
Furthermore, console output looks like:
TASK [Modify RDS deployment] **********************************************
changed: [localhost]
TASK [Stopping RDS instances] **************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidDBInstanceStateFault: An error occurred (InvalidDBInstanceState) when calling the StopDBInstance operation: Cannot stop or start a SQLServer MultiAz database instance
fatal: [localhost]: FAILED! => {"boto3_version": "1.11.8", "botocore_version": "1.14.8", "changed": false, "error": {"code": "InvalidDBInstanceState", "message": "Cannot stop or start a SQLServer MultiAz database instance", "type": "Sender"}, "msg": "Unable to stop DB instance: An error occurred (InvalidDBInstanceState) when calling the StopDBInstance operation: Cannot stop or start a SQLServer MultiAz database instance", "response_metadata": {"http_headers": {"connection": "close", "content-length": "311", "content-type": "text/xml", "date": "Tue, 25 Feb 2020 00:01:26 GMT", "x-amzn-requestid": "215571e3-12b6-4b1f-b640-587f3e1686fe"}, "http_status_code": 400, "request_id": "215571e3-12b6-4b1f-b640-587f3e1686fe", "retry_attempts": 0}}
Any ideas what the problem might be and why Ansible does not wait?
I found the solution myself.
Since triggering an action that causes the state to change to "modifying" is an asynchronous operation, I had to use a kind of waiter.
- name: Wait until the DB instance status changes to 'modifying'
  rds_instance_info:
    db_instance_identifier: "{{ rdsIdentifier }}"
  register: database_info
  until: database_info.instances[0].db_instance_status == "modifying"
  retries: 18
  delay: 10
  when:
    - rds_instance_info.db_instance_status != "modifying"
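By the same pattern, a second waiter could be appended after the modification to block until the instance is "available" again before stopping or starting it. A hedged sketch extending the author's task (retry count is an assumption, not from the original):

- name: Wait until the DB instance is available again
  rds_instance_info:
    db_instance_identifier: "{{ rdsIdentifier }}"
  register: database_info
  until: database_info.instances[0].db_instance_status == "available"
  retries: 60
  delay: 10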

EB Custom Platform without default VPC fails

I'm building a custom platform to run our application. We have the default VPC deleted, so according to the documentation I have to specify the VPC and subnet IDs almost everywhere. The command I run for ebp looks like the following:
ebp create -v --vpc.id vpc-xxxxxxx --vpc.subnets subnet-xxxxxx --vpc.publicip
The above spins up the Packer environment without any issue; however, when Packer starts to build an instance I get the following error:
2017-12-07 18:07:05 UTC+0100 ERROR [Instance: i-00f376be9fc2fea34] Command failed on instance. Return code: 1 Output: 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX1.0.19-builder.log'. Hook /opt/elasticbeanstalk/hooks/packerbuild/build.rb failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2017-12-07 18:06:55 UTC+0100 ERROR 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX:1.0.19-builder.log'
2017-12-07 18:06:55 UTC+0100 ERROR Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: 28d94e8c-e24d-440f-9c64-88826e042e9d'
Both the template and the platform.yaml specify the vpc_id and subnet ID; however, this is not taken into account by Packer.
platform.yaml:
version: "1.0"
provisioner:
  type: packer
  template: tomcat_platform.json
  flavor: ubuntu1604
metadata:
  maintainer: <Enter your contact details here>
  description: Ubuntu running Tomcat
  operating_system_name: Ubuntu Server
  operating_system_version: 16.04 LTS
  programming_language_name: Java
  programming_language_version: 8
  framework_name: Tomcat
  framework_version: 7
  app_server_name: "none"
  app_server_version: "none"
option_definitions:
  - namespace: "aws:elasticbeanstalk:container:custom:application"
    option_name: "TOMCAT_START"
    description: "Default application startup command"
    default_value: ""
option_settings:
  - namespace: "aws:ec2:vpc"
    option_name: "VPCId"
    value: "vpc-xxxxxxx"
  - namespace: "aws:ec2:vpc"
    option_name: "Subnets"
    value: "subnet-xxxxxxx"
  - namespace: "aws:elb:listener:80"
    option_name: "InstancePort"
    value: "8080"
  - namespace: "aws:elasticbeanstalk:application"
    option_name: "Application Healthcheck URL"
    value: "TCP:8080"
tomcat_platform.json:
{
  "variables": {
    "platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
    "platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
    "platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-8fd760f6",
      "instance_type": "t2.micro",
      "ami_virtualization_type": "hvm",
      "ssh_username": "admin",
      "ami_name": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "ami_description": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "vpc_id": "vpc-xxxxxx",
      "subnet_id": "subnet-xxxxxx",
      "associate_public_ip_address": "true",
      "tags": {
        "eb_platform_name": "{{user `platform_name`}}",
        "eb_platform_version": "{{user `platform_version`}}",
        "eb_platform_arn": "{{user `platform_arn`}}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "scripts": [
        "builder/builder.sh"
      ]
    }
  ]
}
I'd appreciate any idea on how to make this work as expected. I found a couple of issues reported against Packer, but they seem to be resolved on their side, and the documentation just says that the template must specify the target VPC and subnet.
The AWS documentation is a little misleading in this instance. You do need a default VPC in order to create a custom platform. From what I've seen, this is because the VPC flags that you are passing in to the ebp create command aren't passed along to the packer process that actually builds the platform.
To get around the error, you can just create a new default VPC that you just use for custom platform creation.
Packer looks for a default VPC (Packer's default behavior) while creating the resources required for building a custom platform, which includes launching an EC2 instance, creating a security group, etc. However, if a default VPC is not present in the region (for example, if it was deleted), the Packer build task fails with the following error:
Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: xyx-yxyx-xyx'
To fix this error, set the following attributes in the "builders" section of the 'template.json' file so that Packer uses a custom VPC and subnet while creating the resources:
▸ vpc_id
▸ subnet_id
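For illustration, the relevant fragment of the builders section would then look something like this (IDs are placeholders, as in the question):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "vpc_id": "vpc-xxxxxxx",
      "subnet_id": "subnet-xxxxxxx",
      "associate_public_ip_address": "true"
    }
  ]
}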

ansible environment variables error when connecting to aws

I am trying to execute a playbook for stopping EC2 instances, among other playbooks.
When I execute the playbook I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "ec2_url": null, "key_material": null, "name": "ansible-sae", "profile": null, "region": "us-east-1", "security_token": null, "state": "present", "validate_certs": true, "wait": false, "wait_timeout": "300"}, "module_name": "ec2_key"}, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
I have added the environment variables to my .bashrc file, but I am still getting the error. When I include the AWS access key and secret key in the playbook itself, it executes without error. I have given PowerUser access to the credentials I provided, and I can see the env variables when I open .bashrc, meaning I have saved them correctly. I am not able to understand why I get this error.
You can see the AWS access key and secret access key variables:
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXXXXX'
and the playbook would be something like:
Playbook format
- hosts: local
  connection: local
  remote_user: ansible_user
  become: yes
  gather_facts: no
  tasks:
    - name: Create a new key pair
      ec2_key:
        name: ansible-sae
        region: us-east-1
        state: present
When I put the same creds in the playbook it works.
Ansible version 2.1.0.0, RHEL 7.2 (Maipo)
I was going through GitHub and found it was a bug; it seems many people were having this problem.
https://github.com/ansible/ansible/issues/10638
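Since the credentials work when set in the playbook, one workaround is to read them from the controller's environment with the env lookup, so they reach the module regardless of how the process environment is passed along. A hedged sketch (untested; parameter names taken from the module_args shown in the error):

- hosts: local
  connection: local
  gather_facts: no
  tasks:
    - name: Create a new key pair
      ec2_key:
        name: ansible-sae
        region: us-east-1
        state: present
        aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
        aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"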