CloudFront facts using Ansible

I need to retrieve the DNS name of my CloudFront distribution (e.g. 1234567890abcd.cloudfront.net) and was wondering if there is a quick way to get this in Ansible without resorting to the AWS CLI.
From gleaning the Extras Modules source, it would appear there is no module for this. How are other people getting this attribute?

You can either write your own module or write a filter plugin in a few lines and accomplish the same thing.
Here is an example of writing a filter in Ansible. Let's name the file aws.py and place it at filter_plugins/aws.py:
import boto3
import botocore
from ansible import errors


def get_cloudfront_dns(region, dist_id):
    """Return the DNS name of the CloudFront distribution id.

    Args:
        region (str): The AWS region.
        dist_id (str): The distribution id.

    Basic Usage:
        >>> get_cloudfront_dns('us-west-2', 'E123456LHXOD5FK')
        '1234567890abcd.cloudfront.net'
    """
    client = boto3.client('cloudfront', region)
    domain_name = None
    try:
        domain_name = (
            client
            .get_distribution(Id=dist_id)['Distribution']['DomainName']
        )
    except Exception as e:
        if isinstance(e, botocore.exceptions.ClientError):
            raise e
        else:
            raise errors.AnsibleFilterError(
                'Could not retrieve the DNS name for CloudFront Dist ID '
                '{0}: {1}'.format(dist_id, str(e))
            )
    return domain_name


class FilterModule(object):
    ''' Ansible core jinja2 filters '''

    def filters(self):
        return {'get_cloudfront_dns': get_cloudfront_dns}
In order to use this plugin, you just need to call it.
dns_entry: "{{ 'us-west-2' | get_cloudfront_dns('123434JHJHJH') }}"
Keep in mind, you will need boto3 and botocore installed in order to use this plugin.
I have a bunch of examples in my linuxdynasty ld-ansible-filters repo.

I ended up writing a module for this (cloudfront_facts.py) that has been accepted into Ansible 2.3.0.

Related

Airflow SSHOperator: How To Securely Access Pem File Across Tasks?

We are running Airflow via AWS's managed MWAA Offering. As part of their offering they include a tutorial on securely using the SSH Operator in conjunction with AWS Secrets Manager. The gist of how their solution works is described below:
Run a Task that fetches the pem file from a Secrets Manager location and store it on the filesystem at /tmp/mypem.pem.
In the SSH Connection include the extra information that specifies the file location
{"key_file":"/tmp/mypem.pem"}
Use the SSH Connection in the SSHOperator.
In short the workflow is supposed to be:
Task1 gets the pem -> Task2 uses the pem via the SSHOperator
All of this is great in theory, but it doesn't actually work. It doesn't work because Task1 may run on a different node from Task2, which means Task2 can't access the /tmp/mypem.pem file location that Task1 wrote the file to. AWS is aware of this limitation according to AWS Support, but now we need to understand another way to do this.
Question
How can we securely store and access a pem file that can then be used by Tasks running on different nodes via the SSHOperator?
I ran into the same problem. I extended the SSHOperator to do both steps in one call.
In AWS Secrets Manager, two keys are added for airflow to retrieve on execution.
{variables_prefix}/airflow-user-ssh-key : the value of the private key
{connections_prefix}/ssh_airflow_user : ssh://replace.user@replace.remote.host?key_file=%2Ftmp%2Fairflow-user-ssh-key
from typing import Optional, Sequence
from os.path import basename, splitext

from airflow.models import Variable
from airflow.providers.ssh.operators.ssh import SSHOperator
from airflow.providers.ssh.hooks.ssh import SSHHook


class SSHOperator(SSHOperator):
    """
    SSHOperator to execute commands on given remote host using the ssh_hook.

    :param ssh_conn_id: :ref:`ssh connection id<howto/connection:ssh>`
        from airflow Connections.
    :param ssh_key_var: name of Variable holding private key.
        Creates "/tmp/{variable_name}.pem" to use in SSH connection.
        May also be inferred from "key_file" in "extras" in "ssh_conn_id".
    :param remote_host: remote host to connect (templated)
        Nullable. If provided, it will replace the `remote_host` which was
        defined in `ssh_hook` or predefined in the connection of `ssh_conn_id`.
    :param command: command to execute on remote host. (templated)
    :param timeout: (deprecated) timeout (in seconds) for executing the command.
        The default is 10 seconds. Use conn_timeout and cmd_timeout parameters instead.
    :param environment: a dict of shell environment variables. Note that the
        server will reject them silently if `AcceptEnv` is not set in SSH config.
    :param get_pty: request a pseudo-terminal from the server. Set to ``True``
        to have the remote process killed upon task timeout.
        The default is ``False`` but note that `get_pty` is forced to ``True``
        when the `command` starts with ``sudo``.
    """

    template_fields: Sequence[str] = ("command", "remote_host")
    template_ext: Sequence[str] = (".sh",)
    template_fields_renderers = {"command": "bash"}

    def __init__(
        self,
        *,
        ssh_conn_id: Optional[str] = None,
        ssh_key_var: Optional[str] = None,
        remote_host: Optional[str] = None,
        command: Optional[str] = None,
        timeout: Optional[int] = None,
        environment: Optional[dict] = None,
        get_pty: bool = False,
        **kwargs,
    ) -> None:
        super().__init__(
            ssh_conn_id=ssh_conn_id,
            remote_host=remote_host,
            command=command,
            timeout=timeout,
            environment=environment,
            get_pty=get_pty,
            **kwargs,
        )
        if ssh_key_var is None:
            key_file = SSHHook(ssh_conn_id=self.ssh_conn_id).key_file
            key_filename = basename(key_file)
            key_filename_no_extension = splitext(key_filename)[0]
            self.ssh_key_var = key_filename_no_extension
        else:
            self.ssh_key_var = ssh_key_var

    def import_ssh_key(self):
        with open(f"/tmp/{self.ssh_key_var}", "w") as file:
            file.write(Variable.get(self.ssh_key_var))

    def execute(self, context):
        self.import_ssh_key()
        super().execute(context)
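A hypothetical DAG task using the operator above might look like this; the task id, connection id, and variable name are illustrative and assume the two Secrets Manager entries listed earlier:

ssh_task = SSHOperator(
    task_id="run_remote_command",
    ssh_conn_id="ssh_airflow_user",
    ssh_key_var="airflow-user-ssh-key",
    command="whoami",
)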
The answer by holly is good. I am sharing a different way I solved this problem: I converted the SSH Connection into a URI, put that into Secrets Manager under the expected connections path, and everything worked great via the SSHOperator. Below are the general steps I took.
Generate an encoded URI
import json
from pathlib import Path

from airflow.models.connection import Connection

pem = Path("/my/pem/file.pem").read_text()
myconn = Connection(
    conn_id="connX",
    conn_type="ssh",
    host="10.x.y.z",
    login="mylogin",
    extra=json.dumps(dict(private_key=pem)),
)
print(myconn.get_uri())
Input that URI under the environment's configured path in Secrets Manager. The important note here is to enter the value in the Plaintext field without wrapping it in a JSON key. Example:
Name the secret airflow/connections/connX and, under Plaintext, include only the URI value.
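If you prefer to do that step in code rather than the console, here is a hedged boto3 sketch; it assumes the airflow/connections prefix your MWAA secrets backend is configured with and reuses myconn from the snippet above:

import boto3

secrets = boto3.client("secretsmanager")
secrets.create_secret(
    Name="airflow/connections/connX",
    SecretString=myconn.get_uri(),  # store the plaintext URI only, no JSON key
)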
Now in the SSHOperator you can reference this connection Id like any other.
remote_task = SSHOperator(
    task_id="ssh_and_execute_command",
    ssh_conn_id="connX",
    command="whoami",
)

How can I combine methods from the EC2 resource and client API?

I'm trying to take in input for stopping and starting instances, but if I use client, it comes up with the error:
'EC2' has no attribute 'instance'
and if I use resource, it says
'ec2.Serviceresource' has no attribute 'Instance'
Is it possible to use both?
#!/usr/bin/env python3
import boto3
import botocore
import sys

print('Enter Instance id: ')
instanceIdIn = input()

ec2 = boto3.resource('ec2')
ec2.Instance(instanceIdIn).stop()
stopwait = ec2.get_waiter('instance_stopped')
try:
    stopwait.wait(instanceIdIn)
    print('Instance Stopped. Starting Instance again.')
except botocore.exceptions.waitError as wex:
    logger.error('instance not stopped')

ec2.Instance(instanceIdIn).start()
try:
    logger.info('waiting for running state...')
    print('Instance Running.')
except botocore.exceptions.waitError as wex2:
    logger.error('instance has not been stopped')
In boto3 there are two kinds of APIs for most services: a low-level client API and a resource-based API, which is an abstraction over the lower-level calls that the client API provides.
You can't directly mix and match calls between them on the same object. Instead, create a separate instance for each, like this:
import boto3
ec2_resource = boto3.resource("ec2")
ec2_client = boto3.client("ec2")
# Now you can call the client methods on the client
# and resource classes from the resource:
my_instance = ec2_resource.Instance("instance-id")
my_waiter = ec2_client.get_waiter("instance_stopped")
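Applied to the script in the question, a rough sketch could look like the following; note that the client waiter takes InstanceIds as a keyword argument and the exception class is botocore.exceptions.WaiterError:

import boto3
import botocore

instance_id = input('Enter Instance id: ')

ec2_resource = boto3.resource('ec2')
ec2_client = boto3.client('ec2')

# Stop via the resource, wait via the client
ec2_resource.Instance(instance_id).stop()
try:
    ec2_client.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])
    print('Instance stopped. Starting instance again.')
except botocore.exceptions.WaiterError:
    print('Instance not stopped.')

# Start it again and wait for the running state
ec2_resource.Instance(instance_id).start()
try:
    ec2_client.get_waiter('instance_running').wait(InstanceIds=[instance_id])
    print('Instance running.')
except botocore.exceptions.WaiterError:
    print('Instance did not reach the running state.')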

Why is my lambda not able to talk to elasticache?

I have a Redis ElastiCache cluster that has an FQDN for the primary node in the format master.clustername.x.euw1.cache.amazonaws.com. I also have a Route53 record with a CNAME pointing at that FQDN.
I have a .NET Core lambda in the same VPC as the cluster, with access to the cluster via security groups. The lambda talks to the cluster using the StackExchange.Redis library (GitHub repo here for reference).
If I give the lambda the FQDN for the Redis cluster (the one that starts with master.) as the hostname, I can connect, save data and read it.
If I give the lambda the CNAME (and that CNAME gives the same IP address as the FQDN when I ping it from my local machine and also if I use Dns.GetHostEntry within the lambda) it doesn't connect and I get the following error message:
One or more errors occurred. (It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. SocketFailure on PING): AggregateException
at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
at lambda_method(Closure , Stream , Stream , LambdaContextInternal )
at StackExchange.Redis.ConnectionMultiplexer.ConnectImpl(Func`1 multiplexerFactory, TextWriter log) in c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\ConnectionMultiplexer.cs:line 890
at lambda.Redis.RedisClientBuilder.Build(String redisHost, String redisPort, Int32 redisDbId) in C:\BuildAgent\work\91d24911506461d0\src\Lambda\Redis\RedisClientBuilder.cs:line 9
at lambda.Ioc.ServiceBuilder.GetRedisClient() in C:\BuildAgent\work\91d24911506461d0\src\Lambda\IoC\ServiceBuilder.cs:line 18
at lambda.Ioc.ServiceBuilder.GetServices() in C:\BuildAgent\work\91d24911506461d0\src\Lambda\IoC\ServiceBuilder.cs:line 11
at Handlers.OrderHandler.Run(SNSEvent request, ILambdaContext context) in C:\BuildAgent\work\91d24911506461d0\src\Lambda\Handlers\OrderHandler.cs:line 26
Has anyone seen anything similar to this?
It turned out that because I was using an SSL certificate on the ElastiCache cluster, and the certificate was bound to the master. endpoint whereas I was trying to connect via the CNAME, the certificate validation was failing.
So I ended up querying the Route53 record within the code to get the master endpoint and it worked.
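The original Lambda here is .NET, but in boto3 terms that lookup is roughly the sketch below; the hosted zone id and record name are placeholders:

import boto3

route53 = boto3.client("route53")
resp = route53.list_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",             # placeholder hosted zone id
    StartRecordName="redis.example.com",  # the Route53 CNAME record
    StartRecordType="CNAME",
    MaxItems="1",
)
# The CNAME target is the master.<cluster>... endpoint the certificate is bound to.
master_endpoint = resp["ResourceRecordSets"][0]["ResourceRecords"][0]["Value"]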
A possible workaround to isolate the problem from your client library: follow AWS's tutorial and rewrite your Lambda as something like the code below (example in Python; note it targets a Memcached cluster via auto discovery rather than Redis).
from __future__ import print_function
import time
import uuid
import sys
import socket
import elasticache_auto_discovery
from pymemcache.client.hash import HashClient

# elasticache settings
elasticache_config_endpoint = "your-elasticache-cluster-endpoint:port"
nodes = elasticache_auto_discovery.discover(elasticache_config_endpoint)
nodes = map(lambda x: (x[1], int(x[2])), nodes)
memcache_client = HashClient(nodes)


def handler(event, context):
    """
    This function puts an item into memcache and gets it back.
    Memcache is hosted using ElastiCache.
    """
    # Create a random UUID... this will be the sample element we add to the cache.
    uuid_inserted = uuid.uuid4().hex
    # Put the UUID into the cache.
    memcache_client.set('uuid', uuid_inserted)
    # Get the item (UUID) from the cache.
    uuid_obtained = memcache_client.get('uuid')
    if uuid_obtained.decode("utf-8") == uuid_inserted:
        # This print should go to CloudWatch Logs and the Lambda console.
        print("Success: Fetched value %s from memcache" % (uuid_inserted))
    else:
        raise Exception("Value is not the same as we put :(. Expected %s got %s" % (uuid_inserted, uuid_obtained))
    return "Fetched value from memcache: " + uuid_obtained.decode("utf-8")
Reference: https://docs.aws.amazon.com/lambda/latest/dg/vpc-ec-deployment-pkg.html

Boto: how to get security group by id?

I'm trying to get a security group by its group id.
Here is the code:
#!/usr/bin/env python2.7
import boto.ec2
import argparse
parser = argparse.ArgumentParser(description="")
parser.add_argument('sec_group_id', help='Security group id')
parser.add_argument('region_name', help='Region name')
args = parser.parse_args()
sec_group_id = args.sec_group_id
region_name = args.region_name
conn = boto.ec2.connect_to_region(region_name);
GivenSecGroup=conn.get_all_security_groups(sec_group_id)
When I execute this:
./sec_groups.py sg-45b9a12c eu-central-1
I get the output:
Traceback (most recent call last):
File "./sec_groups.py", line 22, in <module>
GivenSecGroup=conn.get_all_security_groups(sec_group_id)
File "/usr/lib/python2.7/dist-packages/boto/ec2/connection.py", line 2969, in get_all_security_groups
[('item', SecurityGroup)], verb='POST')
File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 1182, in get_list
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidGroup.NotFound</Code><Message>The security group 'sg-45b9a12c' does not exist in default VPC 'vpc-d289c0bb'</Message></Error></Errors><RequestID>edf2afd0-f552-4bdf-938e-1bccef798145</RequestID></Response>
So basically it says “The security group 'sg-45b9a12c' does not exist in default VPC 'vpc-d289c0bb'”
But this security group does exist in the default VPC! Here is the proof:
AWS console screenshot
How can I make this work?
I would be grateful for your answer.
Short answer:
just change
GivenSecGroup=conn.get_all_security_groups(sec_group_id)
to
GivenSecGroup=conn.get_all_security_groups(group_ids=[sec_group_id])
Long answer:
The first argument of get_all_security_groups is a list of security group names and the second is a list of ids:
def get_all_security_groups(self, groupnames=None, group_ids=None,
                            filters=None, dry_run=False):
    """
    Get all security groups associated with your account in a region.

    :type groupnames: list
    :param groupnames: A list of the names of security groups to retrieve.
                       If not provided, all security groups will be
                       returned.

    :type group_ids: list
    :param group_ids: A list of IDs of security groups to retrieve for
                      security groups within a VPC.
I will show an alternative boto3 answer alongside @Vor's.
IMHO, you should switch to boto3; the developers have made it clear that boto will not get new features. You don't need to specify the region, since you can tie the region to your credentials/config file, etc.
import boto3
import argparse

ec2 = boto3.client("ec2")

parser = argparse.ArgumentParser(description="")
parser.add_argument('sec_group_id', help='Security group id')
args = parser.parse_args()
sec_group_id = args.sec_group_id

my_sec_grp = ec2.describe_security_groups(GroupIds=[sec_group_id])
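To actually inspect the result, you can read fields straight off the response dictionary, for example:

for sg in my_sec_grp["SecurityGroups"]:
    print(sg["GroupId"], sg["GroupName"], sg.get("VpcId"))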
boto3 is closely tied to the AWS CLI. The current AWS CLI has features such as --query that let you filter the results returned. When AWS implements new features, they will land in boto3, not boto.

django boto3: NoCredentialsError -- Unable to locate credentials

I am trying to use boto3 in my django project to upload files to Amazon S3. Credentials are defined in settings.py:
AWS_ACCESS_KEY = xxxxxxxx
AWS_SECRET_KEY = xxxxxxxx
S3_BUCKET = xxxxxxx
In views.py:
import os
import boto3

s3 = boto3.client('s3')
path = os.path.dirname(os.path.realpath(__file__))
s3.upload_file(path + '/myphoto.png', S3_BUCKET, 'myphoto.png')
The system complains about Unable to locate credentials. I have two questions:
(a) It seems that I am supposed to create a credential file ~/.aws/credentials. But in a django project, where do I have to put it?
(b) The s3 method upload_file takes a file path/name as its first argument. Is it possible that I provide a file stream obtained by a form input element <input type="file" name="fileToUpload">?
This is what I use for a direct upload; I hope it provides some assistance.
import boto
from boto.exception import S3CreateError
from boto.s3.connection import S3Connection

conn = S3Connection(settings.AWS_ACCESS_KEY,
                    settings.AWS_SECRET_KEY,
                    is_secure=True)
try:
    bucket = conn.create_bucket(settings.S3_BUCKET)
except S3CreateError as e:
    bucket = conn.get_bucket(settings.S3_BUCKET)

k = boto.s3.key.Key(bucket)
k.key = filename
k.set_contents_from_filename(filepath)
Not sure about (a), but Django is very flexible with file management.
Regarding (b), you can also sign the upload and do it directly from the client to reduce bandwidth usage; it's quite sneaky and secure too. You need to use some JavaScript to manage the upload. If you want details I can include them here.
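For the boto3 path in the original question, here is a minimal sketch, assuming the AWS_ACCESS_KEY / AWS_SECRET_KEY / S3_BUCKET names from settings.py above: passing the credentials explicitly to the client avoids the ~/.aws/credentials lookup for (a), and upload_fileobj accepts a file-like object such as request.FILES['fileToUpload'] for (b).

import boto3
from django.conf import settings

s3 = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_SECRET_KEY,
)

def upload_photo(django_file):
    # django_file is the uploaded file object, e.g. request.FILES['fileToUpload']
    s3.upload_fileobj(django_file, settings.S3_BUCKET, 'myphoto.png')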