Change ElastiCache node DNS record in CloudFormation template

I need to create a CNAME record for an ElastiCache cluster. However, I am building a Redis cluster and there is only one node. As far as I can tell, there is no
ConfigurationEndpoint.Address for a Redis cluster. Is there any way to set the DNS name for the node in the cluster, and how do I do it?
Currently the template looks like this:
"ElastiCahceDNSRecord" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to ElastiCache",
"RecordSets" : [{
"Name" : "elche01.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "myelasticache", "ConfigurationEndpoint.Address" ]
}
]
}]
}
}

For folks coming to this page for a solution: there is now a way to get the Redis endpoint directly from within CloudFormation.
You can get RedisEndpoint.Address from AWS::ElastiCache::CacheCluster, or PrimaryEndPoint.Address from AWS::ElastiCache::ReplicationGroup.
Per the documentation (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-cache-cluster.html):
RedisEndpoint.Address - The DNS address of the configuration endpoint for the Redis cache cluster.
RedisEndpoint.Port - The port number of the configuration endpoint for the Redis cache cluster.
or
Per the documentation (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html):
PrimaryEndPoint.Address - The DNS address of the primary read-write cache node.
PrimaryEndPoint.Port - The number of the port that the primary read-write cache engine is listening on.
An example CFN (other bits not included):
Resources:
  DnsRedis:
    Type: 'AWS::Route53::RecordSetGroup'
    Properties:
      HostedZoneName: 'a.hosted.zone.name.'
      RecordSets:
        - Name: 'a.record.set.name'
          Type: CNAME
          TTL: '300'
          ResourceRecords:
            - !GetAtt
              - RedisCacheCluster
              - RedisEndpoint.Address
    DependsOn: RedisCacheCluster
  RedisCacheCluster:
    Type: 'AWS::ElastiCache::CacheCluster'
    Properties:
      ClusterName: cluster-name-redis
      AutoMinorVersionUpgrade: 'true'
      AZMode: single-az
      CacheNodeType: cache.t2.small
      Engine: redis
      EngineVersion: 3.2.4
      NumCacheNodes: 1
      CacheSubnetGroupName: !Ref ElastiCacheSubnetGroupId
      VpcSecurityGroupIds:
        - !GetAtt
          - elasticacheSecGrp
          - GroupId

Looks like the ConfigurationEndpoint.Address is only supported for Memcached clusters, not for Redis. Please see this relevant discussion in the AWS forums.
Also, the AWS Auto Discovery docs (still) state:
Note: Auto Discovery is only available for cache clusters running the Memcached engine. Redis cache clusters are single node clusters, thus there is no need to identify and track all the nodes in a Redis cluster.
Looks like your 'best' solution is to query the individual endpoint(s) in use in order to determine the addresses to connect to, using AWS::CloudFormation::Init, as suggested in the AWS forums thread.
UPDATE
As @slimdrive pointed out below, this IS now possible through AWS::ElastiCache::CacheCluster. Please read further below for more details.

You should be able to use PrimaryEndPoint.Address instead of ConfigurationEndpoint.Address in the template provided to get the DNS address of the primary read-write cache node as documented on the AWS::ElastiCache::ReplicationGroup page.
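For example, assuming the myelasticache resource is (or becomes) an AWS::ElastiCache::ReplicationGroup, the ResourceRecords entry in the question's template would become something like this (a sketch, not verified against the rest of the stack):
"ResourceRecords" : [
  {
    "Fn::GetAtt": [ "myelasticache", "PrimaryEndPoint.Address" ]
  }
]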

This can be extremely confusing: depending on what you're trying to do, you use either ConfigurationEndpoint or PrimaryEndpoint. I'm adding my findings here as this was one of the first posts I found when searching. I'll also detail some other issues I've had setting up the ElastiCache Redis engine with CloudFormation. I was trying to set up a CloudFormation resource of type AWS::ElastiCache::ReplicationGroup.
Let me preface this with the fact that I had previously set up a clustered Redis ElastiCache instance using a t2.micro node type with no problems. In fact, I received an error from the node-redis npm package saying that clusters weren't supported, so I also implemented the redis-clustr wrapper around it. Anyway, all of that was working fine.
We then moved forward with trying to create a CloudFormation template for this, and I ran into all sorts of limitations that the AWS console UI must be hiding from people. In chronological order, here were my struggles:
1. t2.micro instances are not supported with auto-failover, so I set AutomaticFailoverEnabled to false.
Fix: t2.micro instances actually can use auto-failover. Use the parameter group that has clustered mode enabled. The default one for me was default.redis3.2.cluster.on (I used version 3.2.6, as this is the most recent version that supports encryption at rest and in transit). The parameter group cannot be changed after the instance is created, so don't forget this part.
2. We received an error from the redis-clustr/node-redis package: this instance has cluster support disabled.
(This is how I found that the parameter group needed the cluster-mode value to be on.)
3. We received an error in the CF template that cluster mode cannot be used if auto-failover is off.
This is what made me try a t2.micro instance again, since I knew my other instance had auto-failover turned on and was using a t2.micro instance. Sure enough, this combination does work together.
4. I had stack outputs and Parameter Store parameters for the connection URL and port. This failed with "x attribute/property does not exist" on the ReplicationGroup.
Fix: It turns out that if cluster mode is disabled (using parameter group default.redis3.2, for example), you must use the PrimaryEndPoint.Address and PrimaryEndPoint.Port values. If cluster mode is enabled, use ConfigurationEndPoint.Address and ConfigurationEndPoint.Port. I had tried using RedisEndpoint.Address and RedisEndpoint.Port with no luck, though this may work with a single Redis node with no replica (I also could have had the casing wrong; see the note below).
NOTE
Also, a major issue that affected me is the casing: the P in EndPoint must be capitalized in the PrimaryEndPoint and ConfigurationEndPoint variations if you are creating an AWS::ElastiCache::ReplicationGroup, but the p is lower case if you are creating an AWS::ElastiCache::CacheCluster: RedisEndpoint, ConfigurationEndpoint. I'm not sure why there's a discrepancy there, but it may be the cause of some problems.
Link to AWS docs for GetAtt, which lists available attributes for different CloudFormation resources
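To make the attribute names and casing concrete, here is a rough sketch of a ReplicationGroup with cluster mode enabled and a matching output; the resource names, node type, and referenced subnet/security groups are made up and untested, so adjust them to your stack:
Resources:
  RedisReplicationGroup:
    Type: 'AWS::ElastiCache::ReplicationGroup'
    Properties:
      ReplicationGroupDescription: clustered redis
      Engine: redis
      EngineVersion: 3.2.6
      CacheNodeType: cache.t2.micro
      # cluster mode is controlled by the parameter group and cannot be changed later
      CacheParameterGroupName: default.redis3.2.cluster.on
      AutomaticFailoverEnabled: true
      NumNodeGroups: 2
      ReplicasPerNodeGroup: 1
      CacheSubnetGroupName: !Ref CacheSubnetGroup
      SecurityGroupIds:
        - !Ref CacheSecurityGroup
Outputs:
  RedisConnectionAddress:
    # ReplicationGroup attributes use a capital P: ConfigurationEndPoint / PrimaryEndPoint.
    # With cluster mode enabled, use ConfigurationEndPoint; with it disabled
    # (e.g. default.redis3.2), use PrimaryEndPoint.Address instead.
    Value: !GetAtt RedisReplicationGroup.ConfigurationEndPoint.Address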

Related

How to create and verify a cross region public certificate through CloudFormation?

I'm attempting to achieve the following through CloudFormation.
From a stack created in the EU region, I want to create (and verify) a public certificate against Route53 in US-EAST-1, due to using CloudFront. I'm aiming to have zero actions performed in the console or the AWS CLI.
The new CloudFormation support for ACM was a little sketchy last week but seems to be working now.
Certificate
Resources:
  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub "${Env}.domain.cloud"
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: !Sub "${Env}.domain.cloud"
          HostedZoneId: !Ref HostedZoneId
All I need to do is use CloudFormation to deploy this into the US-EAST-1 region from a stack in a different region. Everything else is ready for this.
I thought that using CodePipeline's cross-region support would be great, so I started to look into [this documentation][1]. After setting things up in my template, I was met with the following error message:
An error occurred while validating the artifact bucket {...} The bucket named is not located in the `us-east-1` AWS region.
To me this makes no sense, as it seems that you already need at least a couple of resources to exist in the target region for it to work. Cart-before-the-horse kind of behavior. To test this I created an artifact bucket in the target region by hand and things worked fine, but that requires using the CLI or the console when I'm aiming for a CloudFormation-based solution.
Note: I'm running out of time to write this, so I'll update it when I can in a few hours' time. Any help before then would be great, though.
Sadly, that's required for cross-region CodePipeline. From the docs:
When you create or edit a pipeline, you must have an artifact bucket in the pipeline Region and then you must have one artifact bucket per Region where you plan to execute an action.
If you want to fully automate this through CloudFormation, you either have to use a custom resource to create buckets in all the regions in advance, or look at stack sets to deploy one bucket template to multiple regions.
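For the StackSets route, the per-region template can be tiny. A minimal sketch (the bucket name prefix is made up; artifact bucket names must be unique, hence the region suffix):
Resources:
  PipelineArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Deployed by StackSets to every Region the pipeline runs actions in
      BucketName: !Sub 'my-pipeline-artifacts-${AWS::Region}'
Outputs:
  ArtifactBucketName:
    Value: !Ref PipelineArtifactBucket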
P.S. Your link does not work, so I'm not sure whether you're referring to the same documentation page.

Is it possible to execute commands and then update security groups in a CloudFormation template?

I would like to perform the following operations in order with CloudFormation.
1. Start up an EC2 instance.
2. Give it privileges to access the full internet using security group A.
3. Download particular versions of Java and Python.
4. Remove its internet privileges by removing security group A and adding security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a configured userdata script that itself downloads Java/Python and the awscli, as necessary, and then uses the awscli to switch security groups for the current EC2 instance.
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
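A rough sketch of that user data approach, assuming an instance role that allows ec2:ModifyInstanceAttribute and placeholder parameters named AmiId, SubnetId, SecurityGroupA, SecurityGroupB and InstanceProfileWithModifyRights (package names are also just illustrative):
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AmiId
      InstanceType: t2.micro
      SubnetId: !Ref SubnetId
      IamInstanceProfile: !Ref InstanceProfileWithModifyRights
      SecurityGroupIds:
        - !Ref SecurityGroupA      # full internet access while bootstrapping
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum install -y java-1.8.0-openjdk python3
          # downloads are done, swap to the locked-down security group B
          INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
          aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
            --groups ${SecurityGroupB} --region ${AWS::Region}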
The best way out is to use a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need. This Lambda function can then be called as a custom resource in the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function, and code the Lambda function to use the AWS SDK to make the modifications that you need.
I have leveraged this to post updates to my web server about the progress of the CloudFormation template. Below is a sample snippet from the template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']
MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and uses them to calculate percentage progress; based on that, it sends a request to my web server. For all we care, this Lambda function could do practically anything you want it to.
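Applied to the security group question above, the custom resource call might look roughly like the following. This assumes the Lambda behind CustomResourceArn calls ec2:ModifyInstanceAttribute with the values it receives and signals success back to CloudFormation; every name here is hypothetical:
SwitchToRestrictedSecurityGroup:
  Type: 'Custom::ModifyInstanceSecurityGroups'
  # run only after the instance has finished downloading its packages
  DependsOn: AppInstance
  Properties:
    ServiceToken: !Ref CustomResourceArn
    InstanceId: !Ref AppInstance
    SecurityGroupIds:
      - !Ref SecurityGroupB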

How to insert AWS resource IDs into application configuration files

I'm following the AWS guide for deploying an HA WordPress site to Elastic Beanstalk, which includes using the eb-php-wordpress extension. The process requires editing a couple of configuration files with known resource IDs prior to deploying the application.
In particular, the instructions say to edit the efs-create.config file with a VPC ID and subnet IDs. The file, among other things, helps set the OptionSettings property of the AWS::ElasticBeanstalk::Environment resource. For this reason, I suspect I should just be able to reference it with Ref:. Is this correct, though, since the VPC would be created by another file and the EB environment CloudFormation stack is created next to the VPC stack rather than "inside" it? Would I have to use an Fn:: call to get the information?
The section of the configuration file I'm working with looks like this:
option_settings:
  aws:elasticbeanstalk:customoption:
    EFSVolumeName: "EB-EFS-Volume"
    VPCId: "vpc-XXXXXXXX"
    ## Subnet Options
    SubnetA: "subnet-XXXXXXXX"
    SubnetB: "subnet-XXXXXXXX"
    SubnetC: "subnet-XXXXXXXX"
    SubnetD: "subnet-XXXXXXXX"
Would the VPCId line be something like
VPCId: {Ref: VPC}
where VPC is the name of the VPC resource that I've created? Or, more simply, how would I reference the VPC ID of the default VPC if I stick with that?
You should be able to use Ref to get the various IDs of the Elastic Beanstalk named resources, according to the docs. However, the VPC is not one of these named resources (i.e. those with a logical ID) but a property of one of them: in this case the logical ID is AWSEBSecurityGroup and the property is VpcId, so you should be able to get it using GetAtt instead:
{ "Fn::GetAtt" : [ "AWSEBSecurityGroup", "VpcId" ] }
from the functions docs and the CloudFormation docs
A similar approach should also work for the subnets.

AWS Elastic Beanstalk - can I set the name of the scaling group?

I can set the name of the LoadBalancer that EB defines with the following ebextension configuration:
Resources:
  AWSEBLoadBalancer:
    Type: "AWS::ElasticLoadBalancing::LoadBalancer"
    Properties:
      "LoadBalancerName":
        "Fn::Join":
          - ""
          - - {Ref: AWSEBEnvironmentName}
            - "-elb"
This lets me address the load balancer by name in scripts, and also makes the load balancer list in the EC2 console much easier to read (you can just scan for the ELB you're interested in instead of having to dig through tags).
I'd like to do similar for the autoscaling group, is there any way to do this?
Edit: I've decided not to manipulate the ASG directly (scaling should be done through EB settings rather than on the ASG itself, to avoid inconsistency). But I would still like to rename the ASG, just to make the console view cleaner.
This can be done with { Ref: AWSEBAutoScalingGroup }.
A complete list of all supported options is available here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-format-resources-eb.html
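If the goal is just a cleaner console view, one option is to override the EB-defined group in an .ebextensions file and add a Name tag, mirroring the load balancer snippet above. This is an untested sketch, and it tags the group rather than renaming it; I haven't verified how Elastic Beanstalk merges such overrides:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      Tags:
        - Key: Name
          Value:
            "Fn::Join":
              - ""
              - - {Ref: AWSEBEnvironmentName}
                - "-asg"
          PropagateAtLaunch: true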

Unreliable discovery for elasticsearch nodes on ec2

I'm using elasticsearch (0.90.x) with the cloud-aws plugin. Sometimes nodes running on different machines aren't able to discover each other ("waited for 30s and no initial state was set by the discovery"). I've set "discovery.ec2.ping_timeout" to "15s", but this doesn't seem to help. Are there other settings that might make a difference?
discovery:
  type: ec2
  ec2:
    ping_timeout: 15s
Not sure if you are aware of this blog post: http://www.elasticsearch.org/tutorials/elasticsearch-on-ec2/. It explains the plugin settings in depth.
Adding the cluster name, like so:
cluster.name: your_cluster_name
discovery:
  type: ec2
  ...
might help.