I am creating an Elastic Beanstalk environment using the CDK, and after that I would like to create a CloudWatch dashboard in the same stack to view the Beanstalk metrics. How can I get the Beanstalk load balancer details using CfnEnvironment?
const dashboardBody = {
  widgets: [
    {
      "type": "metric",
      "width": 24,
      "height": 3,
      "properties": {
        "title": "Request Count",
        "metrics": [
          [
            "AWS/ApplicationELB",
            "RequestCount",
            "LoadBalancer",
            "<<Loadbalancer Name>>"
          ]
        ],
        "region": "<<AWS Region>>",
        "stat": "Sum",
        "yAxis": {
          "left": {
            "min": 0
          }
        }
      }
    }
  ]
}

const dashboard = new cdk.aws_cloudwatch.CfnDashboard(this, 'dashboard', {
  dashboardName: 'dashboard',
  dashboardBody: JSON.stringify(dashboardBody)
})
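CfnEnvironment only exposes a few attributes directly (for example attrEndpointUrl), so one way to obtain the app/<name>/<id> value that the AWS/ApplicationELB LoadBalancer dimension expects is to call DescribeEnvironmentResources through an AwsCustomResource at deploy time. The sketch below assumes aws-cdk-lib v2, an ALB-backed environment, and that ebEnv is the CfnEnvironment created earlier in the stack; the construct IDs are illustrative.

import * as cdk from 'aws-cdk-lib';

// Look up the environment's load balancer when the stack deploys.
// `ebEnv` is assumed to be the CfnEnvironment defined earlier in this stack.
const describeResources = new cdk.custom_resources.AwsCustomResource(this, 'DescribeEbResources', {
  onUpdate: {
    service: 'ElasticBeanstalk',
    action: 'describeEnvironmentResources',
    parameters: { EnvironmentName: ebEnv.ref }, // Ref on a CfnEnvironment returns the environment name
    physicalResourceId: cdk.custom_resources.PhysicalResourceId.of('DescribeEbResources'),
  },
  policy: cdk.custom_resources.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cdk.custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// For ALB-backed environments this field typically contains the load balancer ARN.
const lbArn = describeResources.getResponseField('EnvironmentResources.LoadBalancers.0.Name');

// The AWS/ApplicationELB "LoadBalancer" dimension is the ARN suffix: app/<name>/<id>.
const lbDimension = cdk.Fn.select(1, cdk.Fn.split('loadbalancer/', lbArn));

lbDimension can then be used in place of "<<Loadbalancer Name>>" in the dashboard body; if you interpolate a token like this into the JSON, serializing with cdk.Stack.of(this).toJsonString(dashboardBody) instead of plain JSON.stringify is the safer option.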
I'm creating a CloudFormation template to deploy an Auto Scaling group that should only use Spot Instances. CloudFormation throws an error with this template. What's wrong here?
Error:
CREATE_FAILED Encountered unsupported property InstancesDistribution
{
  "Resources": {
    "testasg": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "LaunchTemplate": {
          "LaunchTemplateId": "lt-0c8090cd4510eb25e",
          "Version": "1"
        },
        "MaxSize": "10",
        "MinSize": "2",
        "DesiredCapacity": "2",
        "VPCZoneIdentifier": [
          "subnet1",
          "subnet2"
        ],
        "MaxInstanceLifetime": 86400,
        "InstancesDistribution": {
          "OnDemandAllocationStrategy": "lowest-price",
          "OnDemandBaseCapacity": 0,
          "OnDemandPercentageAboveBaseCapacity": 0,
          "SpotAllocationStrategy": "lowest-price",
          "SpotInstancePools": 2
        },
        "NewInstancesProtectedFromScaleIn": false,
        "TerminationPolicies": [
          "OldestInstance"
        ],
        "Tags": [
          {
            "Key": "Cluster",
            "Value": "Production",
            "PropagateAtLaunch": "true"
          }
        ]
      }
    }
  }
}
InstancesDistribution must be nested inside a MixedInstancesPolicy block, which your template does not have.
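A sketch of the corrected Properties section: the launch template reference and InstancesDistribution both move under MixedInstancesPolicy, and the top-level LaunchTemplate property is removed. The instance-type Overrides shown are only illustrative; with SpotInstancePools set to 2 you generally want at least two instance types to choose from.

"Properties": {
  "MixedInstancesPolicy": {
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {
        "LaunchTemplateId": "lt-0c8090cd4510eb25e",
        "Version": "1"
      },
      "Overrides": [
        { "InstanceType": "t3.medium" },
        { "InstanceType": "t3a.medium" }
      ]
    },
    "InstancesDistribution": {
      "OnDemandAllocationStrategy": "lowest-price",
      "OnDemandBaseCapacity": 0,
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "lowest-price",
      "SpotInstancePools": 2
    }
  },
  "MaxSize": "10",
  "MinSize": "2",
  "DesiredCapacity": "2",
  "VPCZoneIdentifier": ["subnet1", "subnet2"],
  "MaxInstanceLifetime": 86400,
  "NewInstancesProtectedFromScaleIn": false,
  "TerminationPolicies": ["OldestInstance"],
  "Tags": [
    { "Key": "Cluster", "Value": "Production", "PropagateAtLaunch": "true" }
  ]
}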
I created a Terraform file to create a dashboard in AWS CloudWatch. Here is my sample file to create the dashboard:
// provider module
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

// cloudwatch dashboard module
resource "aws_cloudwatch_dashboard" "main" {
  dashboard_name = var.dashboard_name
  dashboard_body = <<EOF
{
  "widgets": [
    {
      "type": "metric",
      "x": 0,
      "y": 0,
      "width": 12,
      "height": 6,
      "properties": {
        "metrics": [
          [
            "AWS/EBS",
            "VolumeReadOps",
            "VolumeId",
            "vol-04b26d88efe8ecd54"
          ]
        ],
        "period": 300,
        "stat": "Average",
        "region": "ap-south-1",
        "title": "VolumeRead"
      }
    },
    {
      "type": "metric",
      "x": 0,
      "y": 0,
      "width": 12,
      "height": 6,
      "properties": {
        "metrics": [
          [
            "AWS/EBS",
            "VolumeQueueLength",
            "VolumeId",
            "vol-04b26d88efe8ecd54"
          ]
        ],
        "period": 300,
        "stat": "Average",
        "region": "ap-south-1",
        "title": "VolumeQueueLength"
      }
    },
    {
      "type": "metric",
      "x": 14,
      "y": 13,
      "width": 12,
      "height": 6,
      "properties": {
        "metrics": [
          [
            "AWS/EBS",
            "BurstBalance",
            "VolumeId",
            "vol-04b26d88efe8ecd54"
          ]
        ],
        "period": 300,
        "stat": "Average",
        "region": "ap-south-1",
        "title": "BurstBalance"
      }
    }
  ]
}
EOF
}
Is there a way to add all the metrics available for a particular resource (e.g. EC2, EBS, RDS) to the same widget, or one widget per metric, without listing every metric in a separate block in the Terraform file?
In the AWS console you can just tick the checkbox above the metric list for a resource and all of its metrics are included in the same widget, but I couldn't find a proper way to do this when provisioning with Terraform.
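There is no Terraform attribute that expands to "all metrics of a resource", but you can avoid hand-writing each widget by generating the dashboard body with jsonencode and a for expression over a list of metric names. The sketch below is illustrative: the metric names, the local/variable names, and the layout arithmetic are assumptions, and it reuses the volume ID and region from the question.

locals {
  ebs_volume_id = "vol-04b26d88efe8ecd54"
  # Add whichever AWS/EBS metric names you want widgets for.
  ebs_metrics = ["VolumeReadOps", "VolumeWriteOps", "VolumeQueueLength", "BurstBalance"]
}

resource "aws_cloudwatch_dashboard" "generated" {
  dashboard_name = "${var.dashboard_name}-generated"

  # One widget per metric, laid out two per row, instead of a hand-written block for each.
  dashboard_body = jsonencode({
    widgets = [
      for idx, metric in local.ebs_metrics : {
        type   = "metric"
        x      = (idx % 2) * 12
        y      = floor(idx / 2) * 6
        width  = 12
        height = 6
        properties = {
          metrics = [["AWS/EBS", metric, "VolumeId", local.ebs_volume_id]]
          period  = 300
          stat    = "Average"
          region  = "ap-south-1"
          title   = metric
        }
      }
    ]
  })
}

If you want a single widget that picks up matching metrics automatically, CloudWatch dashboard widgets also accept SEARCH metric expressions (for example SEARCH('{AWS/EBS,VolumeId} vol-04b26d88efe8ecd54', 'Average', 300)), which is closer to the console's "select all" behaviour.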
I am trying to add auto-scaling to multiple DynamoDB tables, since all of the tables would use the same pattern for the auto-scaling configuration.
I can of course create the ScalableTarget again and again, but that is repetitive.
I was wondering if it is possible to re-use the scalable targets.
"DDBTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "Banner_v1",
...
}
},
"WriteCapacityScalableTarget": {
"Type": "AWS::ApplicationAutoScaling::ScalableTarget",
"Properties": {
"MaxCapacity": 15,
"MinCapacity": 5,
"ResourceId": {
"Fn::Join": [
"/",
[
"table",
{
"Ref": "DDBTable"
}
]
]
},
"RoleARN": {
"Fn::ImportValue" : "ScalingRoleArn"
},
"ScalableDimension": "dynamodb:table:WriteCapacityUnits",
"ServiceNamespace": "dynamodb"
}
},
"WriteScalingPolicy": {
"Type": "AWS::ApplicationAutoScaling::ScalingPolicy",
"Properties": {
"PolicyName": "WriteAutoScalingPolicy",
"PolicyType": "TargetTrackingScaling",
"ScalingTargetId": {
"Ref": "WriteCapacityScalableTarget"
},
"TargetTrackingScalingPolicyConfiguration": {
"TargetValue": 50,
"ScaleInCooldown": 60,
"ScaleOutCooldown": 60,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
}
}
}
},
I tried this, but it is not working:
"WriteCapacityScalableTarget": {
"Type": "AWS::ApplicationAutoScaling::ScalableTarget",
"Properties": {
"ResourceId": {
...
"Fn::Join": [
"/",
[
"table",
{
"Ref": "DDBTable"
},
"table",
{
"Ref": "AnotherTable2"
}
]
]
},
...
}
"I can of course create scalableTarget again and again but it's repetitive"
Using plain CloudFormation, this is how you have to do it. The reason is that ResourceId is just a string, rather than a list of strings.
The only other alternative in CloudFormation would be to use a custom resource or macros; in both cases you would have to write a Lambda function to automate the repetitive parts.
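For reference, this is what the repetition looks like in plain CloudFormation for the second table: each table gets its own ScalableTarget (and matching ScalingPolicy) whose ResourceId names a single table. The logical IDs used here (AnotherTable2WriteScalableTarget, AnotherTable2WriteScalingPolicy) are only illustrative.

"AnotherTable2WriteScalableTarget": {
  "Type": "AWS::ApplicationAutoScaling::ScalableTarget",
  "Properties": {
    "MaxCapacity": 15,
    "MinCapacity": 5,
    "ResourceId": {
      "Fn::Join": ["/", ["table", { "Ref": "AnotherTable2" }]]
    },
    "RoleARN": { "Fn::ImportValue": "ScalingRoleArn" },
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "ServiceNamespace": "dynamodb"
  }
},
"AnotherTable2WriteScalingPolicy": {
  "Type": "AWS::ApplicationAutoScaling::ScalingPolicy",
  "Properties": {
    "PolicyName": "AnotherTable2WriteAutoScalingPolicy",
    "PolicyType": "TargetTrackingScaling",
    "ScalingTargetId": { "Ref": "AnotherTable2WriteScalableTarget" },
    "TargetTrackingScalingPolicyConfiguration": {
      "TargetValue": 50,
      "ScaleInCooldown": 60,
      "ScaleOutCooldown": 60,
      "PredefinedMetricSpecification": {
        "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
      }
    }
  }
}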
I am launching an EMR cluster using Step Functions. I need to change the time zone of the cluster from UTC to IST. When launching the cluster through the AWS console, we can specify the configurations in JSON format, but with Step Functions it is unclear which parameter should be used to specify configurations such as the time zone, Hadoop heap size, etc.
The basic code I am using to launch the EMR cluster is as follows:
{
  "StartAt": "Create an EMR cluster",
  "States": {
    "Create an EMR cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
      "Parameters": {
        "Name": "Sample_Cluster",
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.26.0",
        "Applications": [
          { "Name": "Hive" },
          { "Name": "Hadoop" }
        ],
        "Tags": [
          {
            "Key": "Name",
            "Value": "Testcluster_emr"
          }
        ],
        "Instances": {
          "KeepJobFlowAliveWhenNoSteps": true,
          "InstanceFleets": [
            {
              "Name": "MyMasterFleet",
              "InstanceFleetType": "MASTER",
              "TargetOnDemandCapacity": 1,
              "InstanceTypeConfigs": [
                { "InstanceType": "m5.xlarge" }
              ]
            },
            {
              "Name": "MyCoreFleet",
              "InstanceFleetType": "CORE",
              "TargetOnDemandCapacity": 1,
              "InstanceTypeConfigs": [
                { "InstanceType": "m5.xlarge" }
              ]
            }
          ]
        }
      },
      "ResultPath": null,
      "End": true
    }
  }
}
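The createCluster integration passes its Parameters through to the EMR RunJobFlow API, so configuration JSON of the kind the console accepts goes in a top-level Configurations field inside "Parameters", alongside "Instances". A sketch of the addition (the classification and properties are illustrative; exporting TZ this way may not cover every application, and a bootstrap action is another common way to change the OS time zone on the nodes):

"Configurations": [
  {
    "Classification": "hadoop-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "TZ": "Asia/Kolkata",
          "HADOOP_HEAPSIZE": "2048"
        }
      }
    ]
  }
]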
I have a Kubernetes cluster in AWS with EC2 worker nodes in the following AZs, along with corresponding PersistentVolumes in each AZ.
us-west-2a
us-west-2b
us-west-2c
us-west-2d
My problem is that I want to create a Deployment with a volume mount that references a PersistentVolumeClaim, and guarantee that they land in the same AZ; right now it is down to luck whether the Deployment's Pod and the PersistentVolumeClaim end up in the same AZ. If they don't, the Pod fails to mount the volume.
I create four PersistentVolumes by manually creating EBS volumes in each AZ and copying the volume IDs into the specs.
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv-2"
  },
  "spec": {
    "capacity": {
      "storage": "1Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "awsElasticBlockStore": {
      "volumeID": "vol-053f78f0c16e5f20e",
      "fsType": "ext4"
    }
  }
}
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "mydata",
    "namespace": "staging"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Mi"
      }
    }
  }
}
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "myapp",
    "namespace": "default",
    "labels": {
      "app": "myapp"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "myapp"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "myapp"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "hello",
            "image": "centos:7",
            "volumeMounts": [
              {
                "name": "mydata",
                "mountPath": "/etc/data/"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "mydata",
            "persistentVolumeClaim": {
              "claimName": "mydata"
            }
          }
        ]
      }
    }
  }
}
You could try setting the region and availability-zone labels on the PersistentVolume, as mentioned here and here.
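A sketch of what that might look like on one of the PersistentVolumes, assuming the EBS volume referenced by pv-2 lives in us-west-2b (substitute the zone each volume actually lives in). With the failure-domain labels set, the scheduler keeps Pods that use this volume in the same AZ; on newer clusters the topology.kubernetes.io/zone label and a StorageClass with volumeBindingMode: WaitForFirstConsumer are the more current way to get the same guarantee.

{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv-2",
    "labels": {
      "failure-domain.beta.kubernetes.io/region": "us-west-2",
      "failure-domain.beta.kubernetes.io/zone": "us-west-2b"
    }
  },
  "spec": {
    "capacity": {
      "storage": "1Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "awsElasticBlockStore": {
      "volumeID": "vol-053f78f0c16e5f20e",
      "fsType": "ext4"
    }
  }
}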