Create a Route53 record from the command line using aws-cli

How can I easily create an Amazon Route53 record from the command line? It takes too long to click around in the web console.

You need to know the hosted zone ID. List your hosted zones:
$ aws route53 list-hosted-zones
The output should be:
{
"HostedZones": [
{
"Id": "/hostedzone/ZFYKW933LX916",
"Name": "example.com.",
"CallerReference": "C4E8C4F3-5265-4248-B324-807A4AB90ABC",
"Config": {
"PrivateZone": false
},
"ResourceRecordSetCount": 39
},
{
"Id": "/hostedzone/Z6JTNNZOHT191",
"Name": "example.net.",
"CallerReference": "A4001EE9-C0FD-F484-9F8D-688F681EFDEF",
"Config": {
"PrivateZone": false
},
"ResourceRecordSetCount": 16
}
]
}
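If you only need the zone ID for scripting, the CLI's --query option can extract it directly; a minimal sketch, assuming your zone is named example.com.:
$ aws route53 list-hosted-zones --query "HostedZones[?Name=='example.com.'].Id" --output text
/hostedzone/ZFYKW933LX916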
Now create and submit a change batch:
$ aws --profile messa route53 change-resource-record-sets --hosted-zone-id /hostedzone/ZFYKW933LX916 --change-batch '{"Changes": [ { "Action": "UPSERT", "ResourceRecordSet": { "Name": "foobar.example.com", "Type": "A", "TTL": 3600, "ResourceRecords": [{ "Value": "11.222.33.44" }] } } ]}'
The output should be:
{
"ChangeInfo": {
"Id": "/change/C2T36TTVOVS7KX",
"Status": "PENDING",
"SubmittedAt": "2020-02-12T12:54:43.056Z"
}
}
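The change is applied asynchronously; the Id from ChangeInfo can be used to check when it has propagated. A quick sketch using the ID from the output above:
$ aws route53 get-change --id /change/C2T36TTVOVS7KX
or, to block until the status flips from PENDING to INSYNC:
$ aws route53 wait resource-record-sets-changed --id /change/C2T36TTVOVS7KX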

Related

aws-ec2 OVA to AMI import-image error: ClientError: Unable to find an etc directory with fstab

My goal is to convert an OVA into an AMI for deploying EC2 instances. The OVA is a standard SLES 12 SP5 image with proprietary software installed on it. SLES 12 is supported for EC2 image import: https://docs.aws.amazon.com/vm-import/latest/userguide/prerequisites.html
The OVA has been uploaded to an S3 bucket. I have closely followed this very well documented guide on VM imports on AWS: https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html, which works great until the conversion fails (error below).
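For reference, the containers.json passed to import-image below follows the documented disk-container format; a minimal sketch with the bucket and key redacted the same way as in the output:
[
  {
    "Description": "SLES 12 OVA",
    "Format": "OVA",
    "UserBucket": {
      "S3Bucket": "<redacted>",
      "S3Key": "<redacted>.ova"
    }
  }
]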
Image import begins:
MAC-LT: ~/aws $ aws ec2 import-image --description "SLES 12 OVA" --disk-containers file://containers.json --region us-east-2
{
"Description": "SLES 12 OVA",
"ImportTaskId": "import-ami-<redacted>",
"Progress": "1",
"SnapshotDetails": [
{
"Description": "SLES 12 OVA",
"DiskImageSize": 0.0,
"Format": "OVA",
"UserBucket": {
"S3Bucket": "<redacted>",
"S3Key": "<redacted>.ova"
}
}
],
"Status": "active",
"StatusMessage": "pending"
}
Conversion starts:
MAC-LT: ~/aws $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-<redacted> --region <redacted>
{
"ImportImageTasks": [
{
"Description": "SLES 12 OVA",
"ImportTaskId": "import-ami-<redacted>",
"Progress": "19",
"SnapshotDetails": [
{
"DiskImageSize": 5130945536.0,
"Format": "VMDK",
"Status": "active",
"UserBucket": {
"S3Bucket": "<redacted>",
"S3Key": "<redacted>.ova"
}
}
],
"Status": "active",
"StatusMessage": "converting",
"Tags": []
}
]
}
But it eventually fails after ~30 minutes with:
MAC-LT: ~/aws $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-<redacted> --region <redacted>
{
"ImportImageTasks": [
{
"Description": "SLES 12 OVA",
"ImportTaskId": "import-ami-<redacted>",
"SnapshotDetails": [
{
"DeviceName": "/dev/sde",
"DiskImageSize": 5130945536.0,
"Format": "VMDK",
"Status": "completed",
"UserBucket": {
"S3Bucket": "<redacted>",
"S3Key": "<redacted>.ova"
}
}
],
"Status": "deleted",
"StatusMessage": "ClientError: Unable to find an etc directory with fstab.",
"Tags": []
}
]
}
Has anyone seen this? Are there any customizations to the OVA that are required to make this work?

AWS Certificate Manager Pending Validation when DNS validation is successful

Resolved! - Ended up just needing to contact Amazon Support to push it through.
I'm attempting to renew a certificate created in AWS Certificate Manager (ACM), but I'm stuck in the dreadful PENDING_VALIDATION status; this is a DNS-validated certificate that I validated using the CNAME record.
Under Domains I can see the domain validation has a status of Success and a renewal status of Success.
If I run aws acm describe-certificate --certificate-arn "examplearn", the output shows DomainValidationOptions with a ValidationStatus of SUCCESS for the CNAME validation.
Sensitive values have been replaced with "example":
{
"Certificate": {
"CertificateArn": "arn:aws:acm:us-east-1:example:certificate/certid",
"DomainName": "*.example.com",
"SubjectAlternativeNames": [
"*.example.com"
],
"DomainValidationOptions": [
{
"DomainName": "*.example.com",
"ValidationDomain": "*.example.com",
"ValidationStatus": "SUCCESS",
"ResourceRecord": {
"Name": "examplename",
"Type": "CNAME",
"Value": "examplevalue"
},
"ValidationMethod": "DNS"
}
],
"Serial": "",
"Subject": "CN=*.example.com",
"Issuer": "Amazon",
"CreatedAt": "2019-01-17T12:53:01-08:00",
"IssuedAt": "2021-10-22T21:21:50.177000-07:00",
"Status": "ISSUED",
"NotBefore": "2021-10-22T17:00:00-07:00",
"NotAfter": "2022-11-23T15:59:59-08:00",
"KeyAlgorithm": "RSA-2048",
"SignatureAlgorithm": "SHA256WITHRSA",
"InUseBy": [
"example",
"example",
"example",
"example"
],
"Type": "AMAZON_ISSUED",
"RenewalSummary": {
"RenewalStatus": "PENDING_VALIDATION",
"DomainValidationOptions": [
{
"DomainName": "*.example.com",
"ValidationDomain": "*.example.com",
"ValidationStatus": "SUCCESS",
"ResourceRecord": {
"Name": "examplename",
"Type": "CNAME",
"Value": "examplevalue"
},
"ValidationMethod": "DNS"
}
],
"UpdatedAt": "2022-09-21T23:39:15.161000-07:00"
},
"KeyUsages": [
{
"Name": "DIGITAL_SIGNATURE"
},
{
"Name": "KEY_ENCIPHERMENT"
}
],
"ExtendedKeyUsages": [
{
"Name": "TLS_WEB_SERVER_AUTHENTICATION",
"OID": "1.3.6.1.5.5.7.3.1"
},
{
"Name": "TLS_WEB_CLIENT_AUTHENTICATION",
"OID": "1.3.6.1.5.5.7.3.2"
}
],
"RenewalEligibility": "ELIGIBLE",
"Options": {
"CertificateTransparencyLoggingPreference": "ENABLED"
}
}
}
I successfully followed the instructions in https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-pending-validation/ (checking that the CNAME response exactly matches the CNAME value shown in ACM when copy-pasting).
The site's domain registration is in Route 53, with the NS records pointing to Cloudflare, where DNS is managed.
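For anyone checking the same thing, one quick way to see what ACM's validation servers will get back from Cloudflare (using the redacted examplename/examplevalue placeholders from the output above) is:
$ dig +short examplename CNAME
which should return the examplevalue target shown in ACM.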
Is there something obvious that pops out to you? Thank you!

Trying to get the route table ID in AWS cli using the tag name as a filter

How can I get the AWS route table ID using a tag name as the filter?
The tag name I want to look for is - eksctl-live-cluster/PublicRouteTable
In the below example, the end result is that I would want to get the command to return the id of "rtb-0b6d5359a281c6fd9"
Using the command below I can get all the info for all the route tables in my VPC. I have tried adding tags and names in the --query part unsuccessfully and have played around with --filter. I just want to get the ID for the one table that uses the name "eksctl-live-cluster/PublicRouteTable".
aws ec2 describe-route-tables --filters "Name=vpc-id,Values=vpc-0a75516801dc9a130" --query "RouteTables[]"
Here is the output of all the route tables when I use the first command:
[
{
"Associations": [
{
"AssociationState": {
"State": "associated"
},
"RouteTableAssociationId": "rtbassoc-07ef991c747ba58a5",
"Main": true,
"RouteTableId": "rtb-0ad0dde171cc946c9"
}
],
"RouteTableId": "rtb-0ad0dde171cc946c9",
"VpcId": "vpc-0a75516801dc9a130",
"PropagatingVgws": [],
"Tags": [],
"Routes": [
{
"GatewayId": "local",
"DestinationCidrBlock": "10.170.0.0/16",
"State": "active",
"Origin": "CreateRouteTable"
}
],
"OwnerId": "000000000"
},
{
"Associations": [
{
"SubnetId": "subnet-0e079eb96b85fc72c",
"AssociationState": {
"State": "associated"
},
"RouteTableAssociationId": "rtbassoc-062f19d9175f4f596",
"Main": false,
"RouteTableId": "rtb-0b6d5359a281c6fd9"
},
{
"SubnetId": "subnet-0b1fae931da8c9d8f",
"AssociationState": {
"State": "associated"
},
"RouteTableAssociationId": "rtbassoc-0a22d395d0b6196ac",
"Main": false,
"RouteTableId": "rtb-0b6d5359a281c6fd9"
}
],
"RouteTableId": "rtb-0b6d5359a281c6fd9",
"VpcId": "vpc-0a75516801dc9a130",
"PropagatingVgws": [],
"Tags": [
{
"Value": "live",
"Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name"
},
{
"Value": "live",
"Key": "alpha.eksctl.io/cluster-name"
},
{
"Value": "0.29.2",
"Key": "alpha.eksctl.io/eksctl-version"
},
{
"Value": "PublicRouteTable",
"Key": "aws:cloudformation:logical-id"
},
{
"Value": "eksctl-live-cluster",
"Key": "aws:cloudformation:stack-name"
},
{
"Value": "eksctl-live-cluster/PublicRouteTable",
"Key": "Name"
},
{
"Value": "arn:aws:cloudformation:us-east-1:000000000:stack/eksctl-live-cluster/ef543610-3981-11eb-abcc-0af655d000e7",
"Key": "aws:cloudformation:stack-id"
}
],
"Routes": [
{
"GatewayId": "local",
"DestinationCidrBlock": "10.170.0.0/16",
"State": "active",
"Origin": "CreateRouteTable"
},
{
"GatewayId": "igw-072414b2b1d313970",
"DestinationCidrBlock": "0.0.0.0/0",
"State": "active",
"Origin": "CreateRoute"
}
],
"OwnerId": "996762160"
}
]
This should return what you're looking for:
aws ec2 describe-route-tables --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' --query 'RouteTables[].Associations[].RouteTableId'
In general you can filter with tags using the tag:<tag name> construct. I'm not sure what a / value will do.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
If you want to further filter it by VPC id, you can add on to the filter like this:
aws ec2 describe-route-tables --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' Name=vpc-id,Values=<VPC ID> --query 'RouteTables[].Associations[].RouteTableId'
This line:
aws ec2 --profile prod --region eu-west-1 describe-route-tables --filters Name=tag:Name,Values=private-route-table-eu-west-1b --query 'RouteTables[].Associations[].RouteTableId'
Returned the following because my route table is associated with two subnets:
[
"rtb-04d4b860",
"rtb-04d4b860"
]
If you need a unique output you could pipe this all through jq:
| jq -r '.[]' | sort | uniq
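Alternatively, querying the route table ID directly instead of going through its associations avoids the duplicates in the first place; a sketch of that variant:
aws ec2 describe-route-tables --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' --query 'RouteTables[].RouteTableId' --output text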
References
describe-route-tables

"NLB ARN is malformed" when create VPC link for AWS APIGateway

I followed the tutorial to create a VPC link to my private ELB.
https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-api-with-vpclink-cli.html
But it failed with the error message "statusMessage": "NLB ARN is malformed".
I can find the ELB with the same ARN using the elbv2 CLI, so the ARN must be a valid one...
I can't find any documentation to solve this problem.
Can anyone help me? Thank you.
What I did is as follows.
$ aws elbv2 describe-load-balancers --load-balancer-arns arn:aws:elasticloadbalancing:ap-northeast-1:846239845603:loadbalancer/app/v2-api-balancer/db49ab0ecaef1de8
{
"LoadBalancers": [
{
"Scheme": "internal",
"SecurityGroups": [
"sg-9282b8f4"
],
"LoadBalancerArn": "arn:aws:elasticloadbalancing:ap-northeast-1:846239845603:loadbalancer/app/v2-api-balancer/db49ab0ecaef1de8",
"State": {
"Code": "active"
},
"CreatedTime": "2017-10-18T04:27:28.780Z",
"VpcId": "vpc-dbe3f2be",
"DNSName": "internal-v2-api-balancer-988454399.ap-northeast-1.elb.amazonaws.com",
"AvailabilityZones": [
{
"SubnetId": "subnet-7642062e",
"ZoneName": "ap-northeast-1c"
},
{
"SubnetId": "subnet-c454fa8d",
"ZoneName": "ap-northeast-1b"
}
],
"IpAddressType": "ipv4",
"Type": "application",
"LoadBalancerName": "v2-api-balancer",
"CanonicalHostedZoneId": "Z14GRHDCWA56QT"
}
]
}
$ aws apigateway create-vpc-link \
--name my-test-vpc-link-1 \
--target-arns "arn:aws:elasticloadbalancing:ap-northeast-1:846239845603:loadbalancer/app/v2-api-balancer/db49ab0ecaef1de8"
{
"name": "my-test-vpc-link-1",
"targetArns": [
"arn:aws:elasticloadbalancing:ap-northeast-1:846239845603:loadbalancer/app/v2-api-balancer/db49ab0ecaef1de8"
],
"id": "7eexgn",
"status": "PENDING"
}
$ aws apigateway get-vpc-link --vpc-link-id 7eexgn
{
"id": "7eexgn",
"targetArns": [
"arn:aws:elasticloadbalancing:ap-northeast-1:846239845603:loadbalancer/app/v2-api-balancer/db49ab0ecaef1de8"
],
"status": "FAILED",
"name": "my-test-vpc-link-1",
"statusMessage": "NLB ARN is malformed"
}
VPC links must target a Network Load Balancer (NLB). It looks like you are trying to use an Application Load Balancer; note "Type": "application" in your describe-load-balancers output above.
https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-nlb-for-vpclink-using-console.html
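If you don't already have an NLB in that VPC, a rough sketch of creating one and pointing the VPC link at it instead (the NLB name and the new VPC link name are placeholders; the subnets are the ones from your describe-load-balancers output):
$ aws elbv2 create-load-balancer --name v2-api-nlb --type network --scheme internal \
    --subnets subnet-7642062e subnet-c454fa8d
$ aws apigateway create-vpc-link --name my-test-vpc-link-2 \
    --target-arns "<LoadBalancerArn returned by the previous command>"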

Monitoring memory usage in AWS CloudWatch for a Windows instance

By default, memory usage isn’t monitored by CloudWatch. So I tried to add it to my Windows instance in AWS using these instructions.
This is what I did:
I created a user named custom-metrics-user. Then I stored the access and secret key.
I created and attached an inline policy to the user. It looks like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["cloudwatch:PutMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "ec2:DescribeTags"],
"Resource": "*"
}
]
}
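For reference, a rough sketch of creating that user and inline policy from the CLI; the policy name and file name here are placeholders, assuming the JSON above is saved as custom-metrics-policy.json:
$ aws iam create-user --user-name custom-metrics-user
$ aws iam put-user-policy --user-name custom-metrics-user \
    --policy-name custom-metrics-inline --policy-document file://custom-metrics-policy.json
$ aws iam create-access-key --user-name custom-metrics-user   # prints the access/secret key pair to store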
I launched a Windows Instance [2012 R2 Base AMI]. After accessing the instance through RDP, I found that the AWS.EC2.Windows.CloudWatch.json file is already present.
I changed that .json file accordingly. After changing it, it looks like this:
{
"EngineConfiguration": {
"PollInterval": "00:00:15",
"Components": [
{
"Id": "ApplicationEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Application",
"Levels": "1"
}
},
{
"Id": "SystemEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "System",
"Levels": "7"
}
},
{
"Id": "SecurityEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Security",
"Levels": "7"
}
},
{
"Id": "ETW",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Microsoft-Windows-WinINet/Analytic",
"Levels": "7"
}
},
{
"Id": "IISLog",
"FullName": "AWS.EC2.Windows.CloudWatch.IisLog.IisLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1"
}
},
{
"Id": "CustomLogs",
"FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogDirectoryPath": "C:\\CustomLogs\\",
"TimestampFormat": "MM/dd/yyyy HH:mm:ss",
"Encoding": "UTF-8",
"Filter": "",
"CultureName": "en-US",
"TimeZoneKind": "Local"
}
},
{
"Id": "PerformanceCounter",
"FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"CategoryName": "Memory",
"CounterName": "Available MBytes",
"InstanceName": "",
"MetricName": "Memory",
"Unit": "Megabytes",
"DimensionName": "InstanceId",
"DimensionValue": "{instance_id}"
}
},
{
"Id": "CloudWatchLogs",
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"AccessKey": "",
"SecretKey": "",
"Region": "us-east-1",
"LogGroup": "Default-Log-Group",
"LogStream": "{instance_id}"
}
},
{
"Id": "CloudWatch",
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters":
{
"AccessKey": "AKIAIK2U6EU675354BQ",
"SecretKey": "nPyk9ntdwW0y5oaw8353fsdfTi0e5/imx5Q09vz",
"Region": "us-east-1",
"NameSpace": "System/Windows"
}
}
],
"Flows": {
"Flows":
[
"PerformanceCounter,CloudWatch"
]
}
}
}
I enabled CloudWatch Logs integration under EC2ConfigSettings.
I restarted the EC2Config Service.
I got no errors, but the Memory metric isn't being shown in the CloudWatch console. The blog says to wait 10-15 minutes for the metric to appear, but it has already been an hour since I did this. What's going wrong?
First, you need to add an IAM role to your instance:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToSSM",
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource": [
"*"
]
}
]
}
Note that you cannot add a role to an existing instance. So do it before launching.
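If you prefer to set this up from the CLI, a rough sketch (the role, profile, and file names are placeholders; ec2-trust.json is a standard EC2 trust policy and cw-role-policy.json is the policy above):
$ aws iam create-role --role-name cw-metrics-role --assume-role-policy-document file://ec2-trust.json
$ aws iam put-role-policy --role-name cw-metrics-role --policy-name cw-metrics \
    --policy-document file://cw-role-policy.json
$ aws iam create-instance-profile --instance-profile-name cw-metrics-profile
$ aws iam add-role-to-instance-profile --instance-profile-name cw-metrics-profile --role-name cw-metrics-role
When launching, pass --iam-instance-profile Name=cw-metrics-profile to run-instances (or select the profile in the console).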
Then you need to configure the EC2Config file, normally accessible at the following path:
C:\Program Files\Amazon\Ec2ConfigService\Settings.AWS.EC2.Windows.CloudWatch.json
You should add the following blocks to the JSON file:
...
{
"Id": "PerformanceCounter",
"FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"CategoryName": "Memory",
"CounterName": "Available MBytes",
"InstanceName": "",
"MetricName": "Memory",
"Unit": "Megabytes",
"DimensionName": "InstanceId",
"DimensionValue": "{instance_id}"
}
}
...
{
"Id": "CloudWatch",
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters":
{
"AccessKey": "",
"SecretKey": "",
"Region": "eu-west-1",
"NameSpace": "PerformanceMonitor"
}
}
Do not forget to restart the EC2Config service on your server after changing the config file. You should see the memory metrics in your CloudWatch console after a couple of minutes.
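To confirm from the CLI that data points are actually arriving, you can list the metrics in the configured namespace (PerformanceMonitor in the snippet above, System/Windows in the question's config):
$ aws cloudwatch list-metrics --namespace PerformanceMonitor --metric-name Memory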
The level of CloudWatch monitoring on your instance should also be set to detailed.
Update:
According to the documentation, you can now attach or modify an IAM role to your existing instance.
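For example, attaching an existing instance profile to a running instance (the instance ID and profile name are placeholders):
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=cw-metrics-profile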
I am running a Windows 2012 R2 Base server with an EC2Config version greater than 4.0. If anyone faces the same problem, please restart the Amazon SSM Agent service after restarting the EC2Config service.
I read this at the following link [Step 6]:
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/send_logs_to_cwl.html
It reads the following:
If you are running EC2Config version 4.0 or later, then you must restart the SSM Agent on the instance from the Microsoft Services snap-in.
I solved my issue by doing this.