I have recently started using Ansible to automate the deployment of a Docker image to an Azure Kubernetes Service.
I have an Ansible file called azure_create_aks.yml. I am running the following command on my Mac: ansible-playbook azure_create_aks.yml, but it fails with the following (snippet from the stack trace):
msrest.exceptions.AuthenticationError: , AdalError: Get Token request returned http error: 400 and server response: Bad Request
I've tried uninstalling ansible and azure-cli and reinstalling using the following:
- brew update && brew install azure-cli
- az aks install-cli
- pip3 install ansible[azure]
I also tried uninstalling Python 3 so that it would use Python 2 instead. From looking around on Stack Overflow, I think I might be encountering a dependency issue with msrestazure, or possibly an issue with the version of pip or Python on my local machine.
After running ansible-playbook azure_create_aks.yml, I get the following:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Create Azure Kubernetes Service] *********************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Create resource group] *******************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py:18: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\n import imp\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 366, in set_token\n self.secret\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 179, in acquire_token_with_client_credentials\n return self._acquire_token(token_func)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 128, in _acquire_token\n return token_func(self)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 177, in token_func\n return token_request.get_token_with_client_credentials(client_secret)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 310, in get_token_with_client_credentials\n token = self._oauth_get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 112, in _oauth_get_token\n return client.get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/oauth2_client.py\", line 289, in get_token\n raise AdalError(return_error_string, error_response)\nadal.adal_error.AdalError: Get Token request returned http error: 400 and server response: Bad Request\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 114, in <module>\n _ansiballz_main()\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/imp.py\", line 169, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 630, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 266, in <module>\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 262, in main\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 144, in __init__\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 318, in __init__\n File 
\"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 1095, in __init__\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 354, in __init__\n self.set_token()\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 370, in set_token\n raise_with_traceback(AuthenticationError, \"\", err)\n File \"/usr/local/lib/python3.7/site-packages/msrest/exceptions.py\", line 54, in raise_with_traceback\n raise error.with_traceback(exc_traceback)\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 366, in set_token\n self.secret\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 179, in acquire_token_with_client_credentials\n return self._acquire_token(token_func)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 128, in _acquire_token\n return token_func(self)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 177, in token_func\n return token_request.get_token_with_client_credentials(client_secret)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 310, in get_token_with_client_credentials\n token = self._oauth_get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 112, in _oauth_get_token\n return client.get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/oauth2_client.py\", line 289, in get_token\n raise AdalError(return_error_string, error_response)\nmsrest.exceptions.AuthenticationError: , AdalError: Get Token request returned http error: 400 and server response: Bad Request\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I'm expecting the ansible-playbook command to run and deploy to Azure, but this authentication error is stopping the process.
By the way, here's my ansible playbook file (sanitised):
- name: Create Azure Kubernetes Service
  hosts: localhost
  connection: local
  vars:
    resource_group: pipeline-in-a-box
    location: uksouth
    aks_name: pipeline-in-a-box-cluster
    username: "devOpsBot"
    ssh_key: "My public SSH key"
    client_id: "service principal id"
    client_secret: "service principal password"
    kubernetes_version: "1.14.6"
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"

    - name: Create a managed Azure Container Services (AKS) cluster
      azure_rm_aks:
        name: "{{ aks_name }}"
        location: "{{ location }}"
        resource_group: "{{ resource_group }}"
        dns_prefix: "{{ aks_name }}"
        kubernetes_version: "{{ kubernetes_version }}"
        linux_profile:
          admin_username: "{{ username }}"
          ssh_key: "{{ ssh_key }}"
        service_principal:
          client_id: "{{ client_id }}"
          client_secret: "{{ client_secret }}"
        agent_pool_profiles:
          - name: default
            count: 2
            vm_size: Standard_D2_v2
        tags:
          Environment: Test

    - name: Create Azure Storage Account
      azure_rm_storageaccount:
        resource_group: "{{ resource_group }}"
        name: piabstorage
        type: Standard_RAGRS
        tags:
          testing: testing
          delete: on-exit

    - name: Create managed disk
      azure_rm_manageddisk:
        name: piabdisk
        location: uksouth
        resource_group: "{{ resource_group }}"
        disk_size_gb: 1

    - name: Create an azure container registry
      azure_rm_containerregistry:
        name: piabregistry
        location: "{{ location }}"
        resource_group: "{{ resource_group }}"
        admin_user_enabled: True
        sku: Basic
      register: acr_result

    - name: Push docker image to container registry
      docker_image:
        name: atlassian/confluence-server
        repository: piabregistry.azurecr.io
        push: yes
        source: pull

    - name: Create Azure Container Instance
      azure_rm_containerinstance:
        resource_group: "{{ resource_group }}"
        name: piabcontainer
        ip_address: public
        ports:
          - "8090"
          - "8091"
        registry_login_server: piabregistry.azurecr.io
        registry_username: piabregistry
        registry_password: "{{ acr_result.credentials.password }}"
        containers:
          - name: confluence-server
            ports:
              - "8090"
              - "8091"
            image: atlassian/confluence-server

    - name: Get details of the AKS
      azure_rm_aks_facts:
        name: aksfacts
        resource_group: "{{ resource_group }}"
        show_kubeconfig: user
      register: output

    - name: Show AKS cluster detail
      debug:
        var: output.aks[0]
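Note that only the AKS task is given the service principal explicitly; the azure_rm_resourcegroup task that actually fails authenticates with whatever ambient credentials Ansible finds (environment variables, a ~/.azure/credentials profile, or an az login session). A minimal sketch of the environment-variable route, assuming a working service principal (the placeholder values are mine):

export AZURE_SUBSCRIPTION_ID='<subscription id>'
export AZURE_CLIENT_ID='<service principal app id>'
export AZURE_SECRET='<service principal password>'
export AZURE_TENANT='<tenant id>'
ansible-playbook azure_create_aks.yml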
Related
I have Strimzi Kafka installed on GKE (GCP), and I'm trying to install the Confluent Schema Registry by following this link:
https://github.com/lsst-sqre/strimzi-registry-operator
Steps followed:
Installed strimzi-registry-operator in namespace schema-registry-operator.
(Note: Strimzi Kafka is installed in namespace kafka.)
Commands used:
helm repo add lsstsqre https://lsst-sqre.github.io/charts/
helm repo update
helm install ssr lsstsqre/strimzi-registry-operator -n schema-registry-operator --values values.yaml
values.yaml:
------------
# -- Name of the Strimzi Kafka cluster
clusterName: "versa-kafka-gke"
# -- Namespace where the Strimzi Kafka cluster is deployed
clusterNamespace: "kafka"
# -- Namespace where the strimzi-registry-operator is deployed
operatorNamespace: "strimzi-registry-operator"
Step 2:
Installed the KafkaTopic (registry-schemas) and KafkaUser in namespace 'kafka'.
(Note: Strimzi Kafka is also installed in the namespace kafka.)
kafkatopic.yaml
----------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: registry-schemas
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  partitions: 1
  replicas: 3
  config:
    # http://kafka.apache.org/documentation/#topicconfigs
    cleanup.policy: compact
kafkauser.yaml
--------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: confluent-schema-registry
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  authentication:
    type: tls
  authorization:
    # Official docs on authorizations required for the Schema Registry:
    # https://docs.confluent.io/current/schema-registry/security/index.html#authorizing-access-to-the-schemas-topic
    type: simple
    acls:
      # Allow all operations on the registry-schemas topic
      # Read, Write, and DescribeConfigs are known to be required
      - resource:
          type: topic
          name: registry-schemas
          patternType: literal
        operation: All
        type: allow
      # Allow all operations on the schema-registry* group
      - resource:
          type: group
          name: schema-registry
          patternType: prefix
        operation: All
        type: allow
      # Allow Describe on the __consumer_offsets topic
      - resource:
          type: topic
          name: __consumer_offsets
          patternType: literal
        operation: Describe
        type: allow
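For reference, both manifests are applied into the kafka namespace, e.g.:

kubectl apply -n kafka -f kafkatopic.yaml
kubectl apply -n kafka -f kafkauser.yaml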
Step 3:
Installed StrimziSchemaRegistry in namespace strimzi-schema-operator.
Here is what I see in the namespace schema-registry-operator:
(base) Karans-MacBook-Pro:schema-registry-yamls karanalang$ kc get all -n schema-registry-operator
NAME READY STATUS RESTARTS AGE
pod/strimzi-registry-operator-7867fbc985-rddqw 1/1 Running 0 121m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/strimzi-registry-operator 1/1 1 1 121m
NAME DESIRED CURRENT READY AGE
replicaset.apps/strimzi-registry-operator-7867fbc985 1 1 1 121m
Also, when I log on to the SchemaRegistryOperator pod, I see the following error:
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
await coro
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
objs, resource_version = await fetching.list_objs(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
rsp = await api.get(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
response = await request(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
await errors.check_response(response) # but do not parse it!
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'kind': 'secrets'}, 'code': 403})
[2022-11-30 23:27:39,605] kopf._cogs.clients.w [DEBUG ] Stopping the watch-stream for strimzischemaregistries.v1beta1.roundtable.lsst.codes in 'kafka'.
[2022-11-30 23:27:39,606] kopf._core.reactor.o [ERROR ] Watcher for strimzischemaregistries.v1beta1.roundtable.lsst.codes#kafka has failed: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 148, in check_response
response.raise_for_status()
File "/opt/venv/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1004, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.44.0.1:443/apis/roundtable.lsst.codes/v1beta1/namespaces/kafka/strimzischemaregistries')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
await coro
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
objs, resource_version = await fetching.list_objs(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
rsp = await api.get(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
response = await request(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
await errors.check_response(response) # but do not parse it!
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
A few questions on this:
I don't see the SchemaRegistry pod (which would be listening on port 8081) getting created; only the StrimziSchemaRegistry object is created, in namespace strimzi-schema-operator.
How do I get access to the SchemaRegistry URL so I can upload schemas to it?
How do I resolve the permission error above?
Do I need to create a separate service account for installing schema-registry?
Please advise.
Thanks in advance!
Update:
This is an existing issue with the Strimzi Schema Registry operator (https://github.com/lsst-sqre/strimzi-registry-operator/issues/79).
Essentially, the ServiceAccount is not created in the correct namespace; I re-created the ServiceAccount in the namespace strimzi-registry-operator to resolve the issue.
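For reference, a minimal sketch of the re-created ServiceAccount (the name matches the operator's deployment above; the namespace is an assumption based on the operatorNamespace value in values.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: strimzi-registry-operator
  namespace: strimzi-registry-operator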
However, I'm facing another issue (existing issue: https://github.com/lsst-sqre/strimzi-registry-operator/issues/84): the schema registry is not getting created.
Additional details:
Schema-Registry-operator is deployed in namespace 'strimzi-registry-operator'.
Strimzi Kafka (cluster versa-kafka-gke) is deployed in namespace 'kafka'.
Part of the Strimzi Kafka YAML, with version & listeners:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: versa-kafka-gke #1
spec:
  kafka:
    version: 3.0.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
KafkaUser (confluent-schema-registry) & KafkaTopic (registry-schemas) are deployed in namespace 'kafka'.
Confluent Schema Registry is deployed in namespace 'kafka'.
Error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Error Logging 73s kopf Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
bootstrap_server = get_kafka_bootstrap_server(
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
Normal Logging 73s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Normal Logging 12s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Error Logging 12s kopf Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
bootstrap_server = get_kafka_bootstrap_server(
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
I have the below playbook. It correctly creates a security group in step 1; I can check via the AWS console that this really happens, and the security group looks exactly as expected and has the ID sg-0dcf7ca7899835648.
But in step 2 I get the below error message, which essentially states that the just-created security group does not exist: "The security group 'sg-0dcf7ca7899835648' does not exist."
The same happens when I manually create SGs or manually insert the ID, and the same happens with the group name.
How can I use the just-created security group when launching a new EC2 instance?
The playbook:
---
- name: Create AWS EC2 instances and start them
  hosts: localhost
  gather_facts: false
  vars:
    instances_count: 1
  tasks:
    - name: Setup test security group
      amazon.aws.ec2_group:
        name: sg_conzone_test_01
        description: "ConZone security group with access to several local ports. DONT USE FOR PRODUCTION!!!"
        vpc_id: vpc-6a3ebe00
        region: eu-central-1
        profile: conzone_root
        rules:
          - proto: tcp
            from_port: 3000
            to_port: 3000
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 8000
            to_port: 8000
          - proto: tcp
            from_port: 8983
            to_port: 8983
      register: security_group

    - name: Create AWS EC2 instance
      amazon.aws.ec2:
        profile: conzone_root
        region: eu-central-1
        key_name: ConZone-Testserver-Key01
        instance_type: t2.large
        image: ami-05f7491af5eef733a
        wait: yes
        group_id: "{{ security_group.group_id }}"
        count: "{{ instances_count }}"
        # vpc_subnet_id: vpc-6a3ebe00
        assign_public_ip: yes
        instance_tags:
          Environment: test
          os: ubuntu
          ansible_user: ubuntu
          db: postgres
          solr: yes
      register: ec2
The error message:
TASK [Create AWS EC2 instance] *************************************************************************************
task path: /Users/tkx/devel/conzone/conzone-config/playbooks/create_ec2.yml:32
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: <Response><Errors><Error><Code>InvalidGroup.NotFound</Code><Message>The security group 'sg-0dcf7ca7899835648' does not exist</Message></Error></Errors><RequestID>8241e3ea-5a59-4383-a5ac-f3a568f91960</RequestID></Response>
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 100, in <module>\n File \"<stdin>\", line 92, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/local/Caskroom/miniconda/base/lib/python3.9/runpy.py\", line 210, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/Caskroom/miniconda/base/lib/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/local/Caskroom/miniconda/base/lib/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/var/folders/2w/g4gydjx960qbfxyvmbd3_sl00000gn/T/ansible_amazon.aws.ec2_payload_lpsz83a1/ansible_amazon.aws.ec2_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2.py\", line 1740, in <module>\n File \"/var/folders/2w/g4gydjx960qbfxyvmbd3_sl00000gn/T/ansible_amazon.aws.ec2_payload_lpsz83a1/ansible_amazon.aws.ec2_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2.py\", line 1724, in main\n File \"/var/folders/2w/g4gydjx960qbfxyvmbd3_sl00000gn/T/ansible_amazon.aws.ec2_payload_lpsz83a1/ansible_amazon.aws.ec2_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2.py\", line 1042, in create_instances\n File \"/usr/local/Caskroom/miniconda/base/lib/python3.9/site-packages/boto/ec2/connection.py\", line 2983, in get_all_security_groups\n return self.get_list('DescribeSecurityGroups', params,\n File \"/usr/local/Caskroom/miniconda/base/lib/python3.9/site-packages/boto/connection.py\", line 1186, in get_list\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidGroup.NotFound</Code><Message>The security group 'sg-0dcf7ca7899835648' does not exist</Message></Error></Errors><RequestID>8241e3ea-5a59-4383-a5ac-f3a568f91960</RequestID></Response>\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
It's because (as your commented-out code indicates) one needs to specify vpc_subnet_id:; otherwise AWS will use the "default" VPC, which is evidently not the vpc-6a3ebe00 in which you created the SG.
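In other words, uncomment that line and point it at a real subnet that lives inside vpc-6a3ebe00; a sketch (the subnet ID below is a placeholder):

    - name: Create AWS EC2 instance
      amazon.aws.ec2:
        profile: conzone_root
        region: eu-central-1
        key_name: ConZone-Testserver-Key01
        instance_type: t2.large
        image: ami-05f7491af5eef733a
        wait: yes
        group_id: "{{ security_group.group_id }}"
        vpc_subnet_id: subnet-0123456789abcdef0  # placeholder: any subnet created in vpc-6a3ebe00
        assign_public_ip: yes
      register: ec2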
I have a playbook that starts an ec2 instance. But I keep getting the following error when I run it.
---
- name: Create an ec2 instance
  hosts: localhost
  gather_facts: false
  vars:
    region: us-east-1
    instance_type: t2.micro
    ami: ami-01ac7d9c1179d7b74
    keypair: priyajdm
  tasks:
    - name: Create an ec2 instance
      ec2:
        key_name: "{{ keypair }}"
        group: launch-wizard-31
        instance_type: "{{ instance_type }}"
        image: "{{ ami }}"
        wait: true
        region: "{{ region }}"
        count: 1
        vpc_subnet_id: subnet-02f498e16fd56c277
        assign_public_ip: yes
      register: ec2
Error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AuthFailure: AWS was not able to validate the provided access credentials (request ID: cb70bd1a-b7ec-41aa-895a-fabf9e0b6cfe)
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1553599571.08-120541135236553/AnsiballZ_ec2.py\", line 113, in \n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1553599571.08-120541135236553/AnsiballZ_ec2.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1553599571.08-120541135236553/AnsiballZ_ec2.py\", line 48, in invoke_module\n imp.load_module('main', mod, module, MOD_DESC)\n File \"/tmp/ansible_ec2_payload_hXpgWw/main.py\", line 1702, in \n File \"/tmp/ansible_ec2_payload_hXpgWw/main.py\", line 1686, in main\n File \"/tmp/ansible_ec2_payload_hXpgWw/main.py\", line 989, in create_instances\n File \"/home/ubuntu/.local/lib/python2.7/site-packages/boto/vpc/init.py\", line 1152, in get_all_subnets\n return self.get_list('DescribeSubnets', params, [('item', Subnet)])\n File \"/home/ubuntu/.local/lib/python2.7/site-packages/boto/connection.py\", line 1186, in get_list\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized\n\nAuthFailureAWS was not able to validate the provided access credentialscb70bd1a-b7ec-41aa-895a-fabf9e0b6cfe\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Where do you keep your AWS credentials? You have 3 options:
- Use a boto configuration file
- Set the needed environment variables
- Set the EC2 module's AWS credentials parameters in your task
The last option is the worst practice, but it is the easiest way to solve authentication problems.
Check the EC2 module's notes.
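For instance, option 2 might look like this before running the playbook (the key values and playbook name are placeholders; the boto-based ec2 module reads these standard AWS environment variables):

export AWS_ACCESS_KEY_ID='<your access key>'
export AWS_SECRET_ACCESS_KEY='<your secret key>'
ansible-playbook <your-playbook>.yml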
I have more than 100 running instances.
For example, I have 10 running instances with the tag Name: dev-redis-slave, and now I want to add two new tags: ServiceName: redis-slave and ServiceGroup: redis.
First of all, I try following this guide: https://aws.amazon.com/blogs/apn/getting-started-with-ansible-and-dynamic-amazon-ec2-inventory-management/
Then I execute ec2.py --list | grep redis, and the output is tag_Name_dev_redis_slave. I also ping the group (ansible -m ping tag_Name_dev_redis_slave), and it succeeds.
Next, I want to create the new tags for dev-redis-slave using Ansible.
I create a YAML file like this (playbook.yaml):
- hosts: localhost
  gather_facts: yes
  tasks:
    - name: Adding tags
      ec2_tag:
        resource: tag_Name_dev_redis_slave
        region: xxx
        state: present
        tags:
          ServiceGroup: redis
          ServiceName: redis-slave
I run ansible-playbook playbook.yaml, but it gives an error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: <Response><Errors><Error><Code>InvalidID</Code><Message>The ID 'tag_Name_dev_redis_slave' is not valid</Message></Error></Errors><RequestID>de51df48-df26-4312-8d03-4c8ca2b993bf</RequestID></Response>
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/5b/hhh0h2fx2cxf7_24dmn05ht00000gq/T/ansible_460t6taz/ansible_module_ec2_tag.py\", line 183, in <module>\n main()\n File \"/var/folders/5b/hhh0h2fx2cxf7_24dmn05ht00000gq/T/ansible_460t6taz/ansible_module_ec2_tag.py\", line 160, in main\n ec2.create_tags(resource, dictadd)\n File \"/Users/fourirakbar/Documents/ansible/venv/lib/python3.6/site-packages/boto/ec2/connection.py\", line 4219, in create_tags\n return self.get_status('CreateTags', params, verb='POST')\n File \"/Users/fourirakbar/Documents/ansible/venv/lib/python3.6/site-packages/boto/connection.py\", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidID</Code><Message>The ID 'tag_Name_dev_redis_slave' is not valid</Message></Error></Errors><RequestID>de51df48-df26-4312-8d03-4c8ca2b993bf</RequestID></Response>\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
I also tried to follow this: http://ansible-manual.readthedocs.io/en/latest/ec2_tag_module.html#examples, but got the same error.
How do I fix it?
Sorry, I was wrong in how I defined resource. I thought resource took a tag, but the correct value is an instance ID. Thank you for helping.
So I changed my playbook.yaml like this:
- hosts: localhost
  gather_facts: yes
  vars:
    development:
      - YOUR INSTANCE ID
      - YOUR INSTANCE ID
  tasks:
    - name: Adding tags
      ec2_tag:
        resource: "{{ item }}"
        region: YOUR INSTANCE REGION
      args:
        tags:
          ServiceGroup: redis
          ServiceName: redis-slave
      with_items: "{{ development }}"
But I was wondering: maybe I don't need to list the instance IDs one by one (imagine if we have many instances across different group tags).
If you have an answer, please let me know.
Thank you very much.
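One way to avoid hard-coding the IDs is to look the instances up by tag first and loop over the result. A sketch using ec2_instance_info (named ec2_instance_facts in older Ansible releases; the region and tag values come from the example above):

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Look up instances by their Name tag
      ec2_instance_info:
        region: YOUR INSTANCE REGION
        filters:
          "tag:Name": dev-redis-slave
          instance-state-name: running
      register: redis_slaves

    - name: Tag every matching instance
      ec2_tag:
        resource: "{{ item.instance_id }}"
        region: YOUR INSTANCE REGION
        state: present
        tags:
          ServiceGroup: redis
          ServiceName: redis-slave
      with_items: "{{ redis_slaves.instances }}"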
This question already has answers here:
How to set Linux environment variables with Ansible
(7 answers)
Closed 6 years ago.
I have the following playbook:
- hosts: localhost
  connection: local
  remote_user: test
  gather_facts: no
  vars_files:
    - files/aws_creds.yml
    - files/info.yml
  tasks:
    - name: Basic provisioning of EC2 instance
      ec2:
        assign_public_ip: no
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        region: "{{ aws_region }}"
        image: "{{ standard_ami }}"
        instance_type: "{{ free_instance }}"
        key_name: "{{ ssh_keyname }}"
        count: 3
        state: present
        group_id: "{{ secgroup_id }}"
        #vpc_subnet_id: "{{ private_subnet_id }}"
        wait: no
        #delete_on_termination: yes
        instance_tags:
          Name: Dawny33Template
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: launched
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

    - name: Install dependencies
      yum:
        name: git
        state: present
      sudo: yes

    - name: Install Python libs
      easy_install:
        name: boto3
        state: latest
      sudo: yes

    - name: check out a git repository
      git: repo={{ repo_url }} dest=/home/ec2-user/AnsibleDir/GitRepo accept_hostkey=yes force=yes
      vars:
        repo_url: https://github.com/Dawny33/AnsibleExperiments
      become: yes

    - name: Go to the folder and execute command
      command: chmod 0755 /home/ec2-user/AnsibleDir/GitRepo/processing.py
      become: yes
      become_user: root

    - name: Set credentials
      shell: export AWS_ACCESS_KEY_ID=''
      become: yes
      become_user: root

    - name: Set credentials2
      shell: export AWS_SECRET_ACCESS_KEY=''
      become: yes
      become_user: root

    - name: Run Py script
      command: /home/ec2-user/AnsibleDir/GitRepo/processing.py {{ N }} {{ bucket_name }}
      become: yes
      become_user: root

    - name: Terminate instances that were previously launched
      connection: local
      become: false
      ec2:
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
        region: '{{ aws_region }}'
In this, I check out a git repo and run a Python file, which uses boto.
So, how do I set up AWS credentials in the dynamically created EC2 instances? Is there an Ansible module for doing so?
PS: The shell tasks for exporting the keys are not working. They throw the following error:
"stderr": "sh: s3cmd: command not found\nTraceback (most recent call last):\n File \"/home/ec2-user/AnsibleDir/GitRepo/processing.py\", line 48, in <module>\n print get_details(N, str(bucket_name))\n File \"/home/ec2-user/AnsibleDir/GitRepo/processing.py\", line 37, in get_details\n for obj in bucket.objects.all():\n File \"/usr/local/lib/python2.7/site-packages/boto3-1.4.4-py2.7.egg/boto3/resources/collection.py\", line 83, in __iter__\n for page in self.pages():\n File \"/usr/local/lib/python2.7/site-packages/boto3-1.4.4-py2.7.egg/boto3/resources/collection.py\", line 166, in pages\n for page in pages:\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/paginate.py\", line 102, in __iter__\n response = self._make_request(current_kwargs)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/paginate.py\", line 174, in _make_request\n return self._method(**current_kwargs)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/client.py\", line 253, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/client.py\", line 530, in _make_api_call\n operation_model, request_dict)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/endpoint.py\", line 141, in make_request\n return self._send_request(request_dict, operation_model)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/endpoint.py\", line 166, in _send_request\n request = self.create_request(request_dict, operation_model)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/endpoint.py\", line 150, in create_request\n operation_name=operation_model.name)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/hooks.py\", line 227, in emit\n return self._emit(event_name, kwargs)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/hooks.py\", line 210, in _emit\n response = handler(**kwargs)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/signers.py\", line 90, in handler\n return self.sign(operation_name, request)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/signers.py\", line 147, in sign\n auth.add_auth(request)\n File \"/usr/local/lib/python2.7/site-packages/botocore-1.5.7-py2.7.egg/botocore/auth.py\", line 679, in add_auth\n raise NoCredentialsError\nbotocore.exceptions.NoCredentialsError: Unable to locate credentials",
"stdout": "",
"stdout_lines": [],
"warnings": []
}
The script is: https://github.com/Dawny33/AnsibleExperiments/blob/master/processing.py
You can do either of the following:
1) As suggested by #konstantin in the comments on your question, you can export the keys as environment variables.
2) For AWS-related deployments/EC2 instances that require API keys, you can use IAM instance roles which have the required access that your application needs.
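For option 1, note that a shell: export ... task cannot work: each task runs in its own shell, so the variable dies when the task ends. The usual fix (see the linked duplicate) is the task-level environment: keyword; a sketch, reusing the aws_id/aws_key vars already loaded from files/aws_creds.yml:

    - name: Run Py script with AWS credentials in its environment
      command: /home/ec2-user/AnsibleDir/GitRepo/processing.py {{ N }} {{ bucket_name }}
      environment:
        AWS_ACCESS_KEY_ID: "{{ aws_id }}"
        AWS_SECRET_ACCESS_KEY: "{{ aws_key }}"
      become: yes
      become_user: root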