Consul acl_agent_token setup on bootstrap

I'm attempting to set up a Consul 1.0 cluster in ECS using Terraform. I can get Consul up and running as a cluster, but I'm hitting ACL errors, as documented here. The problem I'm having is running the associated curl scripts to create a token with the proper rules, saving the token that comes back, and applying it on every member of the autoscaling group, both on first boot and every time the group scales up.
Does anyone have any suggestions on how to get this knocked out?

So what I ended up doing was writing a Lambda function to handle two types of events: bootstrapping and adding new nodes, triggered by either a local-exec provisioner in Terraform (bootstrap) or an Auto Scaling group SNS notification (add new node). The bootstrap function stores the acl_agent_token in SSM Parameter Store and applies it to the initial members of the cluster. The add-node function queries Parameter Store and configures the new node via the REST API.
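A rough shell sketch of what the bootstrap step boils down to, assuming Consul 1.0's legacy ACL API and a local agent on port 8500 (the parameter name, rule set, and addresses here are assumptions, not the exact code):

CONSUL_ADDR="http://localhost:8500"
MASTER_TOKEN="$ACL_MASTER_TOKEN"   # the acl_master_token from the server config

# 1. Create an agent token with node-write/service-read rules (legacy ACL API)
AGENT_TOKEN=$(curl -s -X PUT \
  --header "X-Consul-Token: ${MASTER_TOKEN}" \
  --data '{"Name":"agent-token","Type":"client","Rules":"node \"\" { policy = \"write\" } service \"\" { policy = \"read\" }"}' \
  "${CONSUL_ADDR}/v1/acl/create" | jq -r .ID)

# 2. Persist it in SSM Parameter Store for later scale-up events
aws ssm put-parameter --name /consul/acl_agent_token \
  --type SecureString --value "${AGENT_TOKEN}" --overwrite

# 3. Apply it to the local agent
curl -s -X PUT --header "X-Consul-Token: ${MASTER_TOKEN}" \
  --data "{\"Token\": \"${AGENT_TOKEN}\"}" \
  "${CONSUL_ADDR}/v1/agent/token/acl_agent_token"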

One way of implementing this is storing the created token in Vault or S3 (encrypted with KMS) and adding a few lines to user data to retrieve it, locking it down with the appropriate IAM policies.
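The user data retrieval could look something like this (bucket and parameter names are assumptions; the instance profile needs s3:GetObject and kms:Decrypt, or ssm:GetParameter, as appropriate):

# Pull the token from S3 (object encrypted with KMS)...
AGENT_TOKEN=$(aws s3 cp s3://my-consul-secrets/acl_agent_token -)

# ...or from SSM Parameter Store
AGENT_TOKEN=$(aws ssm get-parameter --name /consul/acl_agent_token \
  --with-decryption --query Parameter.Value --output text)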

Related

Send AWS EC2 metrics to AWS Elasticsearch Service Domain for monitoring in Kibana

I am stuck on one point. I have created an EC2 Linux-based instance in AWS.
Now I want to send the EC2 metrics to the managed Elasticsearch domain for monitoring purposes in Kibana. I went through the CloudWatch console and can see the instance's metrics, but I don't see how to connect them to the Elasticsearch domain I created.
Can anyone please help me with this situation?
There is no built-in mechanism for extracting/streaming metric data points in real time. You have to develop a custom solution for that, for example a Lambda function which is invoked every minute and which reads data points using get_metric_data. The Lambda would then inject the points into your ES domain.
To invoke a Lambda function periodically, e.g. every minute, you would set up a CloudWatch Events rule with a schedule expression. The Lambda function would also need permissions granted to interact with CloudWatch metrics.
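Roughly, the two calls the Lambda would make look like this from the CLI (a sketch only; the metric, instance ID, and ES endpoint are placeholders, and the date arithmetic assumes GNU date):

aws cloudwatch get-metric-data \
  --start-time "$(date -u -d '2 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --metric-data-queries '[{
    "Id": "cpu",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]
      },
      "Period": 60,
      "Stat": "Average"
    }
  }]' > datapoints.json

# Index the result into the ES domain
curl -s -X POST "https://my-es-domain.us-east-1.es.amazonaws.com/ec2-metrics/_doc" \
  -H 'Content-Type: application/json' -d @datapoints.json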
Welcome to SO :)
An alternative to the solution suggested by Marcin is to install Metricbeat on the EC2 instance and configure the Metricbeat config file to send metrics to your managed AWS ES domain.
This is pretty simple and you should be able to do this fairly quickly.
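Something along these lines should work (the RPM version and the domain endpoint are assumptions; AWS ES generally requires the OSS build of Beats):

sudo rpm -i https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-oss-7.10.0-x86_64.rpm

# In /etc/metricbeat/metricbeat.yml, point the output at the ES domain:
#   output.elasticsearch:
#     hosts: ["https://my-es-domain.us-east-1.es.amazonaws.com:443"]
#     protocol: "https"

sudo systemctl enable --now metricbeat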

AWS ECS service error: Task long arn format must be enabled for launching service tasks with ECS managed tags

I have an ECS service running in a cluster which has 1 task. Upon a task update, the service suddenly died with this error:
'service my_service_name failed to launch a task with (error Task long arn format must be enabled for launching service tasks with ECS managed tags.)'
Currently running tasks are automatically drained, and the above message shows up every 6 hours in the "Events" tab of the service. No change to the service configuration repairs the issue; rolling back the task update doesn't change anything either.
I believe I'm already using the long ARN format. Looking for help.
This turned out to be an AWS bug, now acknowledged by them. It was supposed to manifest after Jan 1, 2020 but appeared early because of a workflow fault in AWS.
The resources were created by an IAM user who was later deleted, which is why the issue appeared.
I simply removed the following from my task JSON input: propagateTags, enableECSManagedTags
It seems like you are tagging your Amazon ECS resources but did not opt in to this feature, so you have to opt in first. Also, if your deployment mechanism uses regular expressions to parse the old format ARNs or task IDs, this may be a breaking change.
Starting today you can opt in to a new Amazon Resource Name (ARN) and resource ID format for Amazon ECS tasks, container instances, and services. The new format enables the enhanced ability to tag resources in your cluster, as well as tracking the cost of services and tasks running in your cluster.
In most cases, you don't need to change your system beyond opting in to the new format. However, if your deployment mechanism uses regular expressions to parse the old format ARNs or task IDs, this may be a breaking change. It may also be a breaking change if you were storing the old format ARNs and IDs in a fixed-width database field or data structure.
After you have opted in, any new ECS services or tasks have the new ARN and ID format. Existing resources do not receive the new format. If you decide to opt out, any new resources that you later create then use the old format.
You can check this AWS compute blog post on migrating to the new ARN format:
migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2
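The opt-in itself can be done from the CLI, along these lines (use put-account-setting instead to opt in only the calling IAM principal):

aws ecs put-account-setting-default --name serviceLongArnFormat --value enabled
aws ecs put-account-setting-default --name taskLongArnFormat --value enabled
aws ecs put-account-setting-default --name containerInstanceLongArnFormat --value enabled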
Tagging Your Amazon ECS Resources
To help you manage your Amazon ECS tasks, services, task definitions, clusters, and container instances, you can optionally assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them.
Important
To use this feature, it requires that you opt in to the new Amazon Resource Name (ARN) and resource identifier (ID) formats. For more information, see Amazon Resource Names (ARNs) and IDs.
ecs-using-tags

How to get AWS ECS Region

I have my microservice running in AWS ECS, and I want to tell which region this service is running in. Is there a metadata service for me to get my microservice's region?
There are two ways to do this. The first is to use the metadata file. This feature is disabled by default, so you'll need to turn it on.
After enabling it, run cat $ECS_CONTAINER_METADATA_FILE on Linux to see the metadata; the environment variable stores the file's location.
The second is to use the HTTP metadata endpoint. There are two potential endpoints here (version 2 and 3) depending on how the instance is launched, so check the docs.
In either case the region is not a specific property of the metadata, but it can be inferred from the ARN.
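For example, with the v2 endpoint the region can be cut out of the task ARN like this (v3 would use the URL in the ECS_CONTAINER_METADATA_URI environment variable instead):

# The task metadata includes the task ARN, e.g.
# arn:aws:ecs:us-east-1:123456789012:task/..., so the region is the
# fourth colon-separated field.
TASK_ARN=$(curl -s http://169.254.170.2/v2/metadata | jq -r .TaskARN)
REGION=$(echo "$TASK_ARN" | cut -d: -f4)
echo "$REGION"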

Amazon EC2 get tag from CLI - no credentials

The metadata URL from Amazon gives a lot of data but lacks tag information. I tried combining a bunch of different commands and eventually got to the describe-tags CLI command. The problem is that while I can get the instance ID and the region easily enough, I cannot get values for tags without dropping credentials onto the box.
I get the following error:
Unable to locate credentials. You can configure credentials by running "aws configure".
The basic command I wind up executing is:
aws ec2 describe-tags --region us-east-1 --filters "Name=resource-id,Values=SOME_ID"
The process I follow is this:
Create an instance with a predefined application on it
Image the instance
Spin up various instances using the image via the Amazon AWS API programmatically
Tag the instances that get spun up with pieces of critical data
Attempt to read the tags from the application
Any way to get around the credentials issue? I figure that the local machine would have access to its own tag metadata without signing in but that doesn't appear to be the case.
If there's no way to get around it, are there any suggestions to pass in the data to the VM without sitting around and waiting for it to start up?
I really don't want to write a process that sits around waiting for the EC2 instance to finish spinning up, SSHes in, and then passes in the critical data. The data changes on the fly and can differ between the instances that I fire up to handle various events.
I would create your EC2 instances with IAM roles for EC2. You don't need to do anything fancy, and the credentials are then available on the box. It's easy to restrict the role to only what you need.
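With an instance profile attached, the flow in the question works without any stored credentials. A sketch, assuming the role allows ec2:DescribeTags:

# Instance ID and region come from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}   # strip the trailing AZ letter, e.g. us-east-1a -> us-east-1

aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=${INSTANCE_ID}"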

Tag Nodes With Chef Roles Using CloudFormation

So my goal is to launch say 100 nodes in the cloud using CloudFormation, and I would like to tag the nodes with Chef roles within my CloudFormation script instead of using knife. I have set up my CloudFormation nodes to automatically register themselves with the Chef server, and I want them to report their role to the Chef server so that it installs the proper cookbooks on each node (depending on the node's roles). I know this is possible with knife, but I want to bury the node role within my CloudFormation script.
How can I do so?
I do this with Chef. I usually put a JSON file in S3 which describes the roles the machine needs to use. I create an IAM user in CloudFormation which can access the S3 bucket. Then, in my user data script, I first grab the file from S3 and then run chef-client -j /path/to/json/file. I do the same thing with the validation key, fwiw, so that the node can register itself.
HTH
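The user data portion is roughly this (bucket and file names are assumptions; the IAM user needs s3:GetObject on the bucket):

# Fetch the run-list JSON and the validation key from S3
aws s3 cp s3://my-chef-bucket/first-boot.json /etc/chef/first-boot.json
aws s3 cp s3://my-chef-bucket/validation.pem /etc/chef/validation.pem

# Register the node and converge with the roles named in the JSON,
# e.g. {"run_list": ["role[webserver]"]}
chef-client -j /etc/chef/first-boot.json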
I use Puppet, which is of course slightly different to Chef, but the same theory should apply. I send a JSON object as the user data when launching a new instance (also via CloudFormation), then access this data in Puppet to do dynamic configuration.
Puppet handles a lot of this automatically - e.g. it will automatically set the FACTER_EC2_USER_DATA environment variable for me, so I just need to parse the JSON into variables such as $role and $environment, at which point I can dynamically decide which role the instance should be assigned.
So as long as you can find some way to access the user data within Chef, the same approach should work.
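In Chef's case, the user data is available from the instance metadata service, so something like this in a wrapper script would get at the same JSON (the key name is an assumption):

# The raw user data is served by the instance metadata endpoint
curl -s http://169.254.169.254/latest/user-data > /tmp/user-data.json
ROLE=$(jq -r .role /tmp/user-data.json)   # assumes e.g. {"role": "webserver"}
echo "Assigned role: $ROLE"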