terraform running ansible on ec2 instance

I have Terraform that is trying to run Ansible when creating an EC2 instance.
resource "aws_instance" "jenkins-master" {
  depends_on = [aws_main_route_table_association.set-master-default-rt-assoc, aws_kms_alias.master_ebs_cmk]

  provider                    = aws.region-master
  ami                         = data.aws_ssm_parameter.linuxAmi.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.master-key.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins-sg.id]
  subnet_id                   = aws_subnet.master_subnet_1.id
  ipv6_address_count          = 1

  root_block_device {
    encrypted   = false
    volume_size = 30
  }

  provisioner "local-exec" {
    command = <<EOF
aws --profile myprofile ec2 wait instance-status-ok --region us-east-1 --instance-ids ${self.id} \
&& ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/install_jenkins.yaml
EOF
  }
}
My Terraform works if I export the key id and secret of "myprofile" as environment variables:
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=YYYYYYYYYYYYYYYYYYYYYYYYYYY
If I do not export "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY" I get the following error
....
aws_instance.jenkins-master (local-exec): Executing: ["/bin/sh" "-c" "aws --profile myprofile ec2 wait instance-status-ok --region us-east-1 --instance-ids i-04db214244937ed60 \\\n&& ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_jenkins_master_tf' ansible_templates/install_jenkins.yaml\n"]
....
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with auto
aws_instance.jenkins-master (local-exec): plugin: Insufficient boto credentials found. Please provide them in your
aws_instance.jenkins-master (local-exec): inventory configuration file or set them as environment variables.
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with yaml
aws_instance.jenkins-master (local-exec): plugin: Plugin configuration YAML file, not YAML inventory
aws_instance.jenkins-master (local-exec): [WARNING]: * Failed to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml with ini
aws_instance.jenkins-master (local-exec): plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a
aws_instance.jenkins-master (local-exec): YAML file.
aws_instance.jenkins-master (local-exec): [WARNING]: Unable to parse /home/pcooke/workspace/learn-
aws_instance.jenkins-master (local-exec): terraform/modules/ansible_templates/inventory_aws/tf_aws_ec2.yml as an
aws_instance.jenkins-master (local-exec): inventory source
aws_instance.jenkins-master (local-exec): [WARNING]: No inventory was parsed, only implicit localhost is available
aws_instance.jenkins-master (local-exec): [WARNING]: provided hosts list is empty, only localhost is available. Note that
aws_instance.jenkins-master (local-exec): the implicit localhost does not match 'all'
aws_instance.jenkins-master (local-exec): [WARNING]: Could not match supplied host pattern, ignoring:
aws_instance.jenkins-master (local-exec): tag_Name_jenkins_master_tf
Is there a simple way to pass the AWS profile to Ansible so that Ansible can get the right key id and secret?

Is there a simple way to pass the AWS profile to Ansible so that Ansible can get the right key id and secret?
Which user is Ansible executing the task as?
You should include the key id and secret in the AWS config file on the system under that user (boto also reads ~/.aws/credentials, which is where the CLI normally stores them):
$ cat /home/myuser/.aws/config
[default]
aws_access_key_id=...
aws_secret_access_key=...
region=...
output=...
This way, Ansible should read the key content from the system.
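Another option, since credentials are resolved per process: set AWS_PROFILE on the provisioner's command itself, so the aws CLI, ansible-playbook, and the boto-based inventory plugin all inherit the same named profile. A minimal sketch of that inheritance (profile name taken from the question; no real AWS calls are made):

```shell
# An environment variable set on the command line is inherited by every
# child process it spawns, so boto/boto3 inside Ansible would resolve
# credentials from the "myprofile" entry in ~/.aws without exporting
# the raw key pair.
AWS_PROFILE=myprofile sh -c 'echo "child process sees: $AWS_PROFILE"'
```

In the local-exec command this would look like `AWS_PROFILE=myprofile ansible-playbook ...`, assuming the profile exists in the local ~/.aws files.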
Another solution would be to add environment variables to the task:
- hosts: local_test
  gather_facts: false
  vars:
    env_vars:
      aws_access_key_id: abc
      aws_secret_key: abc
  tasks:
    - name: Terraform stuff
      shell: env
      environment: "{{ env_vars }}"
      register: my_env
    - debug:
        msg: "{{ my_env }}"
Note that putting secrets in a playbook isn't considered safe. To do this safely, encrypt them with ansible-vault.

Related

error configuring S3 Backend: no valid credential sources for S3 Backend found

I've been trying to add a CircleCI CI/CD pipeline to my AWS project written in Terraform.
The problem is that terraform init/plan/apply works on my local machine, but it throws this error in CircleCI.
Error -
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
# Invoke jobs via workflows
workflows:
  .......
And my init.sh is -
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]]; then
  echo "environment: $1"
  terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
  echo "Wrong Argument"
  echo "Pass 'dev', 'stage' or 'prod' only."
fi
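The argument guard in init.sh can be checked in isolation. A small sketch with the terraform call stubbed out by an echo (rewritten with a POSIX `case` for portability; `check_env` is an illustrative helper, not part of the original script):

```shell
# Stub of init.sh's guard: accept dev/stage/prod, reject anything else.
check_env() {
  case "$1" in
    dev|stage|prod) echo "environment: $1" ;;  # real script runs terraform init here
    *)              echo "Wrong Argument" ;;
  esac
}

check_env dev    # prints: environment: dev
check_env qa     # prints: Wrong Argument
```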
My main.tf is -
provider "aws" {
  profile = "${var.profile}"
  region  = "${var.region}"
}

terraform {
  backend "s3" {
  }
}
And `backend.dev.conf` is -
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is -
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (Mac M1), but it throws an error for the backend in CircleCI. Yes, I've added environment variables with my aws_secret_access_key and aws_access_key_id, and it still doesn't work.
I've seen so many tutorials and nothing seems to solve this, and I don't want to write AWS credentials in my code. Any idea how I can solve this?
Update:
I have updated my pipeline to this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
    context: terraform
# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
      # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.
You have probably added the aws_secret_access_key and aws_access_key_id to your project settings, but I don't see them being used in your pipeline configuration. You should do something like the following so they are known at runtime:
version: 2.1
orbs:
  python: circleci/python@1.5.0
jobs:
  build:
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    environment:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    steps:
      - run:
          name: Check python version
          command: python --version
      ...
I would advise reading about environment variables in the CircleCI documentation.
OK, I managed to fix this issue. You have to remove profile from the provider block and the other .tf files.
So my main.tf file is -
provider "aws" {
  region = "${var.region}"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
  }
}
And backend.dev.conf is -
bucket = "bucket"
key = "dev/xxx.tfstate"
region = "ap-south-1"
Most importantly, you have to put the access key, access key ID, and region into CircleCI → your project → environment variables, and you have to set up the AWS CLI on CircleCI, apparently inside a job in config.yml:
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
  plan-apply:
    executor: aws-cli/default
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    working_directory: ~/project
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
      - run:
          name: Init infrastructure
          command: sh scripts/init.sh dev
      - run:
          name: Plan infrastructure
          command: sh scripts/plan.sh dev
      - run:
          name: Apply infrastructure
          command: sh scripts/apply.sh dev
.....
.....
This solved the issue, but you have to run init, plan, and apply inside the job where you set up the AWS CLI. I might be wrong to do setup and plan inside the same job, but I'm still learning and this did the job. The API changed, and old tutorials don't work nowadays.
Comment your suggestions if any.
Adding a profile to your backend will fix this issue. Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
    bucket  = "terraform-state"
    region  = "ap-south-1"
    key     = "dev/xxx.tfstate"
    profile = "myAwsCliProfile"
  }
}

Use the output of local-exec to fetch the ARN and use it as variable

I'm using the local-exec provisioner with an AWS CLI command to create an Amazon Interactive Video Service (AWS IVS) channel.
provisioner "local-exec" {
  command = "aws ivs create-channel --name test | tee test.txt"
}
This is the content of test.txt:
channel:
  arn: arn:aws:ivs:us-east-1:XXXXXXX:channel/3ruIGo4qyo8t
  authorized: false
  ingestEndpoint: 5xxxxxxd29b.global-contribute.live-video.net
  latencyMode: LOW
  name: test
  playbackUrl: https://XXXXXX.us-east-1.playback.live-video.net/api/video/v1/us-east-1.XXXXXXXXXX.channel.3ruIGo4qyo8t.m3u8
  recordingConfigurationArn: ''
  tags: {}
  type: STANDARD
streamKey:
  arn: arn:aws:ivs:us-east-1:XXXXXXXXXX:stream-key/9QR0UjQPjtTK
  channelArn: arn:aws:ivs:us-east-1:XXXXXXXXX:channel/3ruIGo4qyo8t
  tags: {}
  value: sk_us-east-1_9QR0UjQPjtTK_8lx6Yy8mJfAUA0VP5fVHwAXL6cYl8u
Now I need to fetch the ARN and later use it as a variable:
resource "null_resource" "channelArn" {
  provisioner "local-exec" {
    command     = "export TF_VAR_channelArn=$(sed -n 's/channelArn: //p' test-channel.txt)"
    interpreter = ["bash", "-c"]
  }
}
After terraform apply I can see that my command is successfully executed, but when I echo $TF_VAR_channelArn it doesn't show anything. How can I get the channelArn from the local file and use it in an aws_ssm_parameter?
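For what it's worth, the sed extraction itself works; the problem is process scope. local-exec runs the command in a child shell, and an export dies when that shell exits, so neither the parent terraform process nor a later interactive shell ever sees TF_VAR_channelArn. A self-contained sketch with a throwaway file and value:

```shell
# 1) The sed expression does pull out the value after "channelArn: ".
printf 'channelArn: arn:aws:ivs:us-east-1:111:channel/demo\n' > /tmp/test-channel.txt
echo "extracted: $(sed -n 's/channelArn: //p' /tmp/test-channel.txt)"

# 2) But an export made inside a child shell (which is what local-exec
#    effectively is) never propagates back to the parent process.
sh -c 'export TF_VAR_channelArn=demo'
echo "parent sees: ${TF_VAR_channelArn:-unset}"   # prints: parent sees: unset
rm /tmp/test-channel.txt
```

A common alternative is to have Terraform read the file back itself, for example via the hashicorp/local provider's `local_file` data source or an `external` data source, instead of relying on environment variables; treat those as hedged suggestions rather than a drop-in fix.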

how to use an aws profile with the ansible ec2.py module

I wrote a quick Ansible playbook to launch a simple EC2 instance, but I think I have an issue with how I'm authenticating.
What I don't want to do is set my AWS access/secret keys as env variables, since they expire each hour and I need to regenerate the ~/.aws/credentials file via a script.
Right now, my ansible playbook looks like this:
--- # Launch ec2
- name: Create ec2 instance
  hosts: local
  connection: local
  gather_facts: false
  vars:
    profile: profile_xxxx
    key_pair: usrxxx
    region: us-east-1
    subnet: subnet-38xxxxx
    security_groups: ['sg-e54xxxx', 'sg-bfcxxxx', 'sg-a9dxxx']
    image: ami-031xxx
    instance_type: t2.small
    num_instances: 1
    tag_name: ansibletest
    hdd_volumes:
      - device_name: /dev/sdf
        volume_size: 50
        delete_on_termination: true
      - device_name: /dev/sdh
        volume_size: 50
        delete_on_termination: true
  tasks:
    - name: launch ec2
      ec2:
        count: 1
        key_name: "{{ key_pair }}"
        profile: "{{ profile }}"
        group_id: "{{ security_groups }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: false
        volumes: "{{ hdd_volumes }}"
        instance_tags:
          Name: "{{ tag_name }}"
          ASV: "{{ tag_asv }}"
          CMDBEnvironment: "{{ tag_cmdbEnv }}"
          EID: "{{ tag_eid }}"
          OwnerContact: "{{ tag_eid }}"
      register: ec2
    - name: print ec2 vars
      debug: var=ec2
my hosts file is this:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python2.7
I run my playbook like this:
ansible-playbook -i hosts launchec2.yml -vvv
and then get this back:
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.yml:27
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: usrxxx
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" && echo ansible-tmp-1485527483.82-106272618422730="` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" ) && sleep 0'
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpnk2rh5 TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpEpwenH TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args; rm -rf "/Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2"
},
"module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
I noticed in the ec2.py file it says this:
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
the AWS_PROFILE variable:
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
so I ran it like this:
AWS_PROFILE=profile_xxxx ansible-playbook -i hosts launchec2.yml -vvv
but still got the same results...
----EDIT-----
I also ran it like this:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i hosts launchec2.yml
but still got this back...still seems to be a credentials issue?
usrxxx$ ansible-playbook -i hosts launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485531356.01-33528208838066/args\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
---EDIT 2------
Completely removed Ansible and then reinstalled it with Homebrew but got the same error... so I went to the directory it's loading ec2.py from (Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py) and replaced that ec2.py with this one: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py. But now I get this error:
Using /Users/usrxxx/ansible/ansible.cfg as config file
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [aws : launch ec2] ********************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/roles/aws/tasks/main.yml:1
Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
fatal: [localhost]: FAILED! => {
"failed": true,
"msg": "module (ec2) is missing interpreter line"
}
It seems you have placed the ec2.py inventory script into your /path/to/playbook/library/ folder.
You should not put dynamic inventory scripts there – this way Ansible runs the inventory script instead of the ec2 module.
Remove ec2.py from your project's library folder (or the Ansible global library path defined in ansible.cfg) and try again.
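As a side note, the "unrecognized arguments" text is the signature of an argparse-based CLI (the inventory script) being handed Ansible's temporary module args file as a positional argument. A hypothetical mini script reproduces the shape of the failure (fake_ec2.py is an illustration, not the real inventory script):

```shell
# A CLI that, like ec2.py, only defines flags: any positional argument
# (such as Ansible's temp args file path) makes argparse bail out.
cat > /tmp/fake_ec2.py <<'EOF'
import argparse
p = argparse.ArgumentParser(prog='ec2.py')
p.add_argument('--list', action='store_true')
p.add_argument('--host')
p.parse_args()
EOF
python3 /tmp/fake_ec2.py /tmp/args 2>&1 | tail -n 1
# last line: ec2.py: error: unrecognized arguments: /tmp/args
rm /tmp/fake_ec2.py
```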

Deploying cloudformation stack to AWS using Ansible Tower

I'm new to Ansible, Ansible Tower, and AWS CloudFormation and am trying to have Ansible Tower deploy an EC2 Container Service using a CloudFormation template. When I run the deploy job, I run into the error below.
TASK [create/update stack] *****************************************************
task path: /var/lib/awx/projects/_6__api/tasks/create_stack.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: awx
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" && echo ansible-tmp-1470427494.79-207756006727790="` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpgAsKKv TO /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
This is the create/update task:
---
- name: create/update stack
  cloudformation:
    stack_name: my-stack
    state: present
    template: templates/stack.yml
    template_format: yaml
    template_parameters:
      VpcId: "{{ vpc_id }}"
      SubnetId: "{{ subnet_id }}"
      KeyPair: "{{ ec2_keypair }}"
      DbUsername: "{{ db_username }}"
      DbPassword: "{{ db_password }}"
      InstanceCount: "{{ instance_count | default(1) }}"
    tags:
      Environment: test
  register: cf_stack

- debug: msg={{ cf_stack }}
  when: debug is defined
The playbook that Ansible Tower executes is a site.yml file:
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  gather_facts: no
  environment:
    AWS_DEFAULT_REGION: "{{ lookup('env', 'AWS_DEFAULT_REGION') | default('us-west-2', true) }}"
  tasks:
    - include: tasks/create_stack.yml
    - include: tasks/deploy_app.yml
This is what my playbook folder structure looks like:
/deploy
  /group_vars
    all
  /library
    aws_ecs_service.py
    aws_ecs_task.py
    aws_ecs_taskdefinition.py
  /tasks
    stack.yml
  /templates
  site.yml
I'm basing everything on Justin Menga's Pluralsight course "Continuous Delivery using Docker and Ansible", but he uses Jenkins, not Ansible Tower, which is probably the source of the disconnect. Anyway, hopefully that is enough information; let me know if I should also provide the stack.yml file. The files under the library directory are Menga's customized modules from his video course.
Thanks for reading all this and for any potential help! This is a link to his deploy playbook repository that I closely modeled everything after: https://github.com/jmenga/todobackend-deploy. The things I took out are the DB RDS pieces.
If you look at the two last lines of the error message you can see that it is attempting to escalate privileges but failing:
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
As this is a local task it is attempting to switch to the root user on the box that Ansible Tower is running on and the user presumably (and for good reason) doesn't have the privileges to do this.
With normal Ansible you can avoid this by not specifying the --become or -b flags on the command line or by specifying become: false in the task/play definition.
As you pointed out in the comments, with Ansible Tower it's a case of unticking the "Enable Privilege Escalation" option in the job template.
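In plain playbook terms, the same fix would look like this on the play itself; a sketch against the site.yml from the question (only the relevant keys shown, and `become: false` is the modern spelling of the old `sudo: false`):

```yaml
# become: false keeps local tasks (like the cloudformation module) running
# as the awx user instead of shelling out to sudo on the Tower host.
- name: Deployment Playbook
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  tasks:
    - include: tasks/create_stack.yml
```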

ansible local_action returns error "invalid output was: [sudo via ansible, key=xxx] password:"

I've been trying to run this Ansible playbook to get an AWS resource's tags:
- name: list resource tags
  local_action: ec2_tag resource=i-abcdefg region=us-east-1 state=list
  register: result
And this error is returned:
failed: [ec2-11-222-333-444.compute-1.amazonaws.com] => {"failed":
true, "parsed": false} invalid output was: [sudo via ansible,
key=heoqwlqnhxlxyzwnxmtbvmdtvmvjbsux] password:
FATAL: all hosts have already failed -- aborting
How can I fix that?
You cannot run this local_action as root. Change your task to be:
- name: list resource tags
  sudo: false
  local_action: ec2_tag resource=i-abcdefg region=us-east-1 state=list
  register: result