How to get broker endpoints of Amazon MSK as an output - amazon-web-services

We have an AWS CloudFormation template through which we create an Amazon MSK (Kafka) cluster, and that is working fine.
Several applications in our product stack consume the broker endpoints created by Amazon MSK. To automate the product deployment, we decided to create a Route 53 record set for the MSK broker endpoints, but we are having a hard time finding how to get the broker endpoints of the MSK cluster as Outputs in an AWS CloudFormation template.
Looking forward to suggestions/guidance on this.

Following on @joinEffort's answer, this is how I did it using custom resources, as the CloudFormation resource for an MSK cluster (AWS::MSK::Cluster) does not expose the broker URLs (option 2 uses boto3 and calls the AWS API directly).
The description of the classes and methods to use from CDK custom resource code can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kafka.html#getBootstrapBrokers-property
Option 1: Using a custom resource:
# Assumes `custom_resources` is imported from aws_cdk and that this method lives
# on a Stack/Construct that defines self._platform_msk_cluster_arn and
# self._environment_name.
def get_bootstrap_servers(self):
    create_params = {
        "ClusterArn": self._platform_msk_cluster_arn
    }
    # SDK call that fetches the bootstrap broker string for the cluster
    get_bootstrap_brokers = custom_resources.AwsSdkCall(
        service='Kafka',
        action='getBootstrapBrokers',
        region='ap-southeast-2',
        physical_resource_id=custom_resources.PhysicalResourceId.of(f'connector-{self._environment_name}'),
        parameters=create_params
    )
    # Custom resource that runs the SDK call on create and on update
    create_update_custom_plugin = custom_resources.AwsCustomResource(
        self,
        'getBootstrapBrokers',
        on_create=get_bootstrap_brokers,
        on_update=get_bootstrap_brokers,
        policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
            resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE
        )
    )
    return create_update_custom_plugin.get_response_field('BootstrapBrokerString')
Option 2: Using boto3:
import json
import boto3

client = boto3.client('kafka', region_name='ap-southeast-2')
response = client.get_bootstrap_brokers(
    ClusterArn='xxx'
)
# From here you can get the broker URLs:
json_response = json.loads(json.dumps(response))
Ref: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.get_bootstrap_brokers
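To tie this back to the original question about CloudFormation Outputs, the value returned by the custom resource can also be surfaced as a stack output. A minimal sketch, assuming CDK v2 and that get_bootstrap_servers is defined on the stack as in Option 1:
from aws_cdk import CfnOutput

# Expose the bootstrap broker string as a CloudFormation stack output
CfnOutput(self, 'MskBootstrapBrokers', value=self.get_bootstrap_servers())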

You should be able to get it from the command below. More details can be found in the AWS CLI reference for aws kafka get-bootstrap-brokers.
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn

Related

Deploying a CDK Script through Service Catalog gives Authorization failures

I've created an AWS CDK script which deploys an ECR image to Fargate.
When executing the script from an EC2 VM (using cdk deploy via the AWS CLI tooling), I can add an IAM role to the EC2 instance, thereby granting all the permissions required, and the script deploys successfully.
However, my aim is to cdk synth the script into a CloudFormation template manually and then deploy it from AWS Service Catalog.
This is where permissions are required, but I'm unsure where exactly to add them.
An example error I get is:
"API: ec2:allocateAddress You are not authorized to perform this operation. Encoded authorization failure message: "
I've looked into the AWS CDK docs (https://docs.aws.amazon.com/cdk/api/v1/docs/aws-iam-readme.html), thinking the CDK script needs to have the permissions embedded; however, the resources I'm trying to create don't seem to have options to add IAM permissions.
Another option is, as with native CloudFormation templates, to add Parameters that allow attaching roles when provisioning the Product, though I haven't found a way to implement this in CDK either.
It seems like a very obvious solution should be available for this, but I've not found it! Any ideas?
The CDK script used:
from constructs import Construct
from aws_cdk import (
    aws_ecs as ecs,
    aws_ec2 as ec2,
    aws_ecr as ecr,
    aws_ecs_patterns as ecs_patterns
)

class MyConstruct(Construct):
    def __init__(self, scope: Construct, id: str, *, repository_name="my-repo"):
        super().__init__(scope, id)
        vpc = ec2.Vpc(self, "my-vpc", max_azs=3)
        cluster = ecs.Cluster(self, "my-ecs-cluster", vpc=vpc)
        repository = ecr.Repository.from_repository_name(self, "my-ecr-repo", repository_name)
        image = ecs.ContainerImage.from_ecr_repository(repository=repository)
        fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(
            self,
            "my-fargate-instance",
            cluster=cluster,
            cpu=256,
            desired_count=1,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=image,
                container_port=3000,
            ),
            memory_limit_mib=512,
            public_load_balancer=True
        )
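As an aside, the parameter approach mentioned in the question would look roughly like this in CDK. This is only a sketch, assuming CDK v2; whether Service Catalog actually passes a launch role through such a parameter is not confirmed here, and the names are illustrative:
from constructs import Construct
from aws_cdk import Stack, CfnParameter

class MyStack(Stack):
    def __init__(self, scope: Construct, id: str) -> None:
        super().__init__(scope, id)
        # Hypothetical CloudFormation parameter for a role ARN supplied at provisioning time
        role_arn = CfnParameter(
            self, "ProvisioningRoleArn", type="String",
            description="ARN of an IAM role to use when provisioning this product"
        )
        # role_arn.value_as_string can then be referenced wherever a role ARN is needed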

Unable to find Minimum/Maximum task value for ECS

I am working with the AWS boto3 library and trying to retrieve certain values.
I first retrieve the list of all clusters, then fetch specific services, then call describe-services for them.
But I am unable to retrieve the two fields, Minimum tasks and Maximum tasks, for services, which are displayed on the AWS ECS console page under the Auto Scaling tab.
Does anybody have any idea how to get these values?
The ECS console hides this fact, but those values actually live in the Application Auto Scaling configuration, not the ECS configuration. I believe you would need to call describe_scalable_targets in Application Auto Scaling to get those values.
Thanks Mark B for the help.
You are right, and I understand that an ECS service has to register with the Application Auto Scaling service, which is a separate service. I am providing sample CLI and Python code to retrieve these values for others now.
aws ecs describe-services --cluster MAGIC-Bonus-Wrappers --services service-name
aws application-autoscaling describe-scalable-targets --service-namespace ecs --resource-ids service/cluster-name/service-name
Python Code:
import boto3

session = boto3.session.Session()
client = session.client('application-autoscaling')

# Wrapper function added for completeness; the original snippet assumed one.
def get_min_max_tasks(serviceId):
    # serviceId has the form service/<cluster-name>/<service-name>
    response = client.describe_scalable_targets(
        ServiceNamespace='ecs',
        ResourceIds=[serviceId]
    )
    def_val = -1, -1
    if "ScalableTargets" in response and len(response['ScalableTargets']) > 0:
        target = response['ScalableTargets'][0]
        if 'MinCapacity' in target and 'MaxCapacity' in target:
            return target['MinCapacity'], target['MaxCapacity']
    return def_val
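For example, using the service/<cluster>/<service> resource id form from the CLI call above (the service name here is illustrative):
min_tasks, max_tasks = get_min_max_tasks('service/MAGIC-Bonus-Wrappers/service-name')
print(min_tasks, max_tasks)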

How to register RDS instance with CloudMap

I know this is possible through the AWS CLI and Console, as I have done it that way, but I would now need to do it in Terraform. I would like to execute the equivalent of the CLI command aws servicediscovery register-instance.
Pointers to any documentation or examples that can be shared would be most beneficial and appreciated.
This is now possible using the aws_service_discovery_instance resource as of version v3.57.0 of the AWS provider.
resource "aws_service_discovery_instance" "example" {
instance_id = "mydb"
service_id = aws_service_discovery_service.example.id
attributes = {
AWS_INSTANCE_CNAME = aws_db_instance.example.address
}
}
Adding instances to the discovery service is not yet supported:
Add an aws_service_discovery_instance resource
But a pull request has already been prepared for that, so hopefully soon:
resource/aws_service_discovery_instance: new implementation

How to fetch AmazonMQ nodes for RabbitMQ brokers using API, CLI or Terraform

I'm trying to create an AWS CloudWatch alarm for SystemCpuUtilization of each RabbitMQ broker node via Terraform. To create the CloudWatch alarm, I need to provide the dimensions (node name and broker) as mentioned in the AWS docs.
Hence, I'm looking to fetch the RabbitMQ broker node names from AWS (via the CLI, API, or Terraform).
Please note: I'm able to see the metrics of each broker node in the AWS CloudWatch console, but not from the API, SDK, or CLI.
I went through the links below but didn't find anything handy:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/mq/index.html#cli-aws-mq
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/mq_broker
Please let me know in case I'm missing something.
Recently, AWS started publishing CPU/Mem/Disk metrics per Broker.
You should see these metrics under AmazonMQ/Broker metrics. You can now use the SystemCpuUtilization metric without a node name dimension and then take the Maximum statistic to get the most overloaded node. You can create a CloudWatch alarm based on this metric.
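A minimal boto3 sketch of such an alarm, assuming the broker-level metric is published in your account; the alarm name, broker name, and region are illustrative:
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='ap-southeast-2')

# Alarm on the broker-level metric, taking the Maximum across nodes
cloudwatch.put_metric_alarm(
    AlarmName='rabbitmq-broker-high-cpu',
    Namespace='AWS/AmazonMQ',
    MetricName='SystemCpuUtilization',
    Dimensions=[{'Name': 'Broker', 'Value': 'my-broker-name'}],
    Statistic='Maximum',
    Period=120,
    EvaluationPeriods=2,
    Threshold=80,
    ComparisonOperator='GreaterThanOrEqualToThreshold'
)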
The AWS MQ node names used for the CloudWatch dimensions do not appear to be exposed through the AWS API, but the node name is predictable knowing the IP address. I believe this can be used to construct valid node names for alarms.
data "aws_region" "current" {}
resource "aws_mq_broker" "example" {
...
}
resource "aws_cloudwatch_metric_alarm" "bat" {
for_each = toset([
for instance in aws_mq_broker.example.instances : instance.ip_address
])
alarm_name = "terraform-test-foobar5"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "SystemCpuUtilization"
namespace = "AWS/AmazonMQ"
period = "120"
statistic = "Average"
threshold = "80"
dimensions = {
Broker = aws_mq_broker.example.name
Node = "rabbitmq#ip-${replace(each.value, ".", "-")}.${data.aws_region.current.name}.compute.internal"
}
}
I raised the above-mentioned problem with AWS support; below is the solution.
First of all, the response from the AWS team: AmazonMQ RabbitMQ broker nodes are managed internally by AWS, and currently they are not exposed via the API or SDK.
As a result, there is NO way to fetch the RabbitMQ broker node names via the API or SDK. Hence it is not possible to directly create a CloudWatch alarm on a RabbitMQ broker node's SystemCpuUtilization, as node names are required dimensions for creating the alarm.
There are two alternative solutions:
Query the RabbitMQ API to fetch the node names (see the sketch after this list).
Use prometheus/cloudwatch-exporter to fetch the metric details from CloudWatch, where node names are available.
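For the first option, a rough sketch using the RabbitMQ management HTTP API (assuming the broker's management console is enabled and reachable; the endpoint and credentials are placeholders):
import requests

# GET /api/nodes on the RabbitMQ management API lists the cluster nodes
resp = requests.get(
    'https://<broker-endpoint>:443/api/nodes',  # placeholder endpoint
    auth=('admin-user', 'admin-password')       # placeholder credentials
)
resp.raise_for_status()
node_names = [node['name'] for node in resp.json()]
print(node_names)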
I used the second method; below is the values file to fetch the metrics we are interested in.
prometheus-cloudwatch-exporter:
  namespace: monitoring
  enabled: true
  override:
    metrics:
      alb: false
      rds: false
      # ... based on requirement
    alerts:
      ec2: false # based on requirement
  additionalMetrics: |-
    # the configuration below will fetch the metrics
    # containing the RabbitMQ broker node names
    - aws_namespace: AWS/AmazonMQ
      aws_metric_name: SystemCpuUtilization
      aws_dimensions: [Broker, Node]
      aws_statistics: [Average]
If everything is configured correctly, you should be able to see the aws_amazonmq_system_cpu_utilization_average metric in Prometheus. Now use Prometheus Alertmanager to create alerts on top of this metric.

boto3 DMS enable CloudWatch logs

I am writing scripts in Python that create DMS tasks using the boto3 package. I wonder if there is any way of programmatically enabling CloudWatch logging for the tasks? I can't find any option to do this with the create_replication_task function.
You can achieve this by defining ReplicationTaskSettings in your create_replication_task call. That is an optional parameter, and you define the task settings in JSON string format. You need to add the following to your task settings:
"Logging": {
"EnableLogging": true
}
In that way, you can enable CloudWatch logging while creating the task from Python using Boto3.
A sample request would be as follows:
import boto3

client = boto3.client('dms')

response = client.create_replication_task(
    ReplicationTaskIdentifier='string',
    SourceEndpointArn='string',
    TargetEndpointArn='string',
    ReplicationInstanceArn='string',
    MigrationType='full-load',  # or 'cdc' or 'full-load-and-cdc'
    TableMappings='string',
    ReplicationTaskSettings="{\"Logging\": {\"EnableLogging\": true}}",
)
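If you prefer not to hand-escape the JSON string, the settings can also be built with json.dumps. The LogComponents entries shown here are an assumption based on the DMS task settings format, so check them against the task settings reference below:
import json

task_settings = {
    "Logging": {
        "EnableLogging": True,
        # Optional per-component log levels (assumed component ids and severities)
        "LogComponents": [
            {"Id": "SOURCE_UNLOAD", "Severity": "LOGGER_SEVERITY_DEFAULT"},
            {"Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEBUG"}
        ]
    }
}

# Pass the serialized settings as ReplicationTaskSettings in create_replication_task
replication_task_settings = json.dumps(task_settings)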
Reference to create_replication_task API is here:
AWS SDK for Python - Boto3 - AWS DMS - Create Replication Task API
Reference to ReplicationTaskSettings parameter is here:
AWS SDK for Python - Boto3 - AWS DMS - Create Replication Task API - Replication Task Settings