Is there an AWS API I can hit to get the availability zone of the machine the code is currently running on? I checked whether such an ENV variable is set automatically by AWS but couldn't find one. Any help would be appreciated.
You can get the current AZ of the instance using the instance metadata service.
For example:
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo ${AZ}
will output (example):
us-east-1e
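If you also need the region, you can derive it locally from the AZ string, since the region is just the AZ minus its trailing zone letter. A minimal sketch (the AZ value below is the example output from the metadata query above):

```shell
# Example AZ value, as returned by the metadata query
AZ="us-east-1e"

# Strip the trailing zone letter to get the region
REGION=${AZ%?}
echo ${REGION}   # us-east-1
```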
Related
To determine how old my EC2 instances are, and to destroy nodes that are older than 90 days, I need to check the creation date of each node.
What is the correct way to do that from the command line?
What I tried:
I tried the ec2metadata command, but its output doesn't contain a creation date.
You can get this information with
aws configservice get-resource-config-history --resource-type AWS::EC2::Instance --resource-id i-xxxxxxxx
The last element in this JSON is what you need.
You can also use get-resource-config-history to find resources that changed between specific dates.
You can also get the creation date of your root volume with
aws ec2 describe-volumes --volume-ids vol-xxxxx
This also gives you the age of your root volume (assuming it has never been replaced).
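Once you have a creation timestamp from either call, the 90-day check itself can be done locally. A sketch using GNU date (the timestamp below is a made-up example, not real output from the commands above):

```shell
# Hypothetical creation time, taken from the CreateTime field of describe-volumes output
CREATED="2020-12-03T07:16:35+00:00"

# Age in whole days
AGE_DAYS=$(( ( $(date +%s) - $(date -d "$CREATED" +%s) ) / 86400 ))

if [ "$AGE_DAYS" -gt 90 ]; then
    echo "instance is older than 90 days"
fi
```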
I really need to know the stop time of AWS EC2 instances. I have checked with AWS CloudTrail, but it's not easy to find the exact stopped EC2 instance there. Is it possible to see the exact stop time of EC2 instances via aws-cli commands or a boto3 script?
You can get this info from the StateTransitionReason field of describe-instances in the AWS CLI when you filter for stopped instances:
aws ec2 describe-instances --filter Name=instance-state-name,Values=stopped --query 'Reservations[].Instances[*].StateTransitionReason' --output text
Example output:
User initiated (2020-12-03 07:16:35 GMT)
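The timestamp can then be pulled out of that string with a little shell parsing. A sketch, assuming the "User initiated (...)" format shown above:

```shell
# Example StateTransitionReason value from describe-instances
REASON="User initiated (2020-12-03 07:16:35 GMT)"

# Extract the text between the parentheses
STOP_TIME=$(echo "$REASON" | sed -n 's/.*(\(.*\)).*/\1/p')
echo "$STOP_TIME"   # 2020-12-03 07:16:35 GMT
```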
AWS Config keeps track of the state of resources as they change over time.
From What Is AWS Config? - AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
Thus, you could look back through the configuration history of the Amazon EC2 instance and extract times for when the instance changed to a Stopped state.
Sometimes the time is missing from StateTransitionReason; in that case you can use CloudTrail and search for Resource Name = instance ID to find the StopInstances API calls.
By default you can track back 90 days, or indefinitely if you create your own trail.
I'm about to deploy Docker container on AWS with credential file formatted like this:
[default]
aws_access_key_id = KEY
aws_secret_access_key = KEY
region=eu-west-2
vpc-id=vpc-bb1b7fd3
and located in ~/.aws/credentials
When I execute the command docker-machine create --driver amazonec2 app
I get:
Couldn't determine your account Default VPC ID : "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: faf606d9-b12e-4a9e-a6c5-18eb609ffc45"
Error setting machine configuration from flags provided: amazonec2 driver requires either the --amazonec2-subnet-id or --amazonec2-vpc-id option or an AWS Account with a default vpc-id
The default VPC ID is already defined. Can anyone help resolve this or point me in the right direction?
The command I'm using:
docker-machine create --driver amazonec2 --amazonec2-access-key AKIAyyy --amazonec2-secret-key AKIAxxx --amazonec2-region eu-west-2 --amazonec2-vpc-id vpc-bb1b7fd3 flask_app
and when I'm trying to use credentials file located in my file system:
docker-machine create --driver amazonec2 flask_app
where vpc-bb1b7fd3 was generated by AWS by default, hence must be valid, and the system time is correct too. I also tried swapping the keys in case I had somehow mixed them up, but they're fine as well. The output of sudo ntpdate ntp.ubuntu.com matched the machine's system time.
Error says: Error with pre-create check: "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: 9d642d91-cd93-4104-b9fb-2a42b1249e3b"
Tried:
On Stack Exchange a very similar problem was solved by restarting the Docker daemon, because Docker's clock stops syncing with the computer's clock when the computer sleeps and wakes again. I restarted the Docker daemon, but the error is still the same.
Problem solved by downloading rootkey.csv from AWS and moving it into ~/.aws.
The Docker instance is now deployed on AWS.
If the issue is not with the keys, there are two possible reasons:
Your system time is wrong
An invalid VPC ID
First, check your computer's clock; it may be wrong even though it is set to update "automatically from the internet." Running the following will fix the computer's clock:
sudo ntpdate ntp.ubuntu.com
(or the equivalent command for your OS)
AWS was not able to validate the provided access credentials
As for the second reason, it seems you may be missing some flags in your command; if fixing the time does not help, please update the question with the command you ran.
VPC ID
We determine your default VPC ID at the start of a command. In some
cases, either because your account does not have a default vpc, or you
don’t want to use the default one, you can specify a vpc with the
--amazonec2-vpc-id flag.
Log in to the AWS console. Go to Services -> VPC -> Your VPCs. Locate the VPC ID you want from the VPC column. Go to Services -> VPC -> Subnets. Examine the Availability Zone column to verify that zone a exists and matches your VPC ID. For example, us-east1-a is in the a availability zone. If the a zone is not present, you can create a new subnet in that zone or specify a different zone when you create the machine.
To create a machine with a non-default VPC-ID:
docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C********* --amazonec2-vpc-id vpc-****** aws02
This example assumes the VPC ID was found in the a availability zone. Use the --amazonec2-zone flag to specify a zone other than the a zone. For example, --amazonec2-zone c signifies us-east1-c.
Docker Machine with AWS driver (Amazon Web Services) — from the Docker documentation
I'm now learning Google Cloud Platform instance creation. As part of learning, I'm trying to launch a RHEL 6 instance of the f1-micro instance type in the us-east1-b zone.
Here is the gcloud command I've used:
gcloud compute --project=<project-id> instances create cldinit-vm --zone=us-east1-b --machine-type=f1-micro--subnet=default --network-tier=PREMIUM --metadata-from-file startup-script=initscript.sh --maintenance-policy=MIGRATE --service-account=<account-id>#developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append --min-cpu-platform="Intel Broadwell" --tags=http-server --image=rhel-6-v20181210 --image-project=rhel-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=cldinit-vm --labels=name=cloudinit-vm
When I run the command, it is showing the error below,
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- Invalid value for field 'resource.machineType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-east1-b/machineTypes/f1-micro--subnet=default'.
Machine type with name 'f1-micro--subnet=default' does not exist in zone 'us-east1-b'.
I have two questions:
I could not modify the subnet setting from "default", as it is the only option available under "network" on the instance launch page. So could anyone help resolve the issue?
Since I'm learning GCP, I launched the CLI command in Cloud Shell directly from the link at the bottom of the Compute Engine instance launch page.
Is there a correction that needs to be made by Google to provide a working command?
As part of learning, I found that there was a missing space between the option value f1-micro and --subnet.
So here is the corrected command snippet:
gcloud compute --project=<project-id> instances create cldinit-vm --zone=us-east1-b --machine-type=f1-micro --subnet=default ....
I'm trying to add user data to my auto scaling on AWS.
When I setup my launch configuration through the web console on AWS I entered the following user data:
#!/bin/bash
echo $RANDOM > /home/ubuntu/clusterID
I had to base64-encode it; I did that with base64encode.org. The result:
IyEvYmluL2Jhc2gNCmVjaG8gJFJBTkRPTSA+IC9ob21lL3VidW50dS9jbHVzdGVySUQ=
When the ec2 instance launches I see the following error:
2015-02-24 07:50:08,754 - init.py[WARNING]: Unhandled
non-multipart userdata starting 'IyEvYmluL2Jhc2gNCmVjaG8g...'
Any ideas what I'm doing wrong?
Is your /home or /home/ubuntu a separate partition? If so, check whether the filesystem is mounted properly before the command executes.
I faced a similar issue a year and a half back, and it was the same mistake I mentioned.
OK, it seems the data passed as user-data does not have to be base64-encoded.
You can pass the user data as-is, and the AWS CLI will encode it before passing it to the EC2 instance.
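To illustrate, here is roughly what the CLI does for you: write the script to a file, and the encoding round-trips cleanly. This is a local sketch, not an actual API call; with the real CLI you would pass the raw file via something like --user-data file:///tmp/userdata.sh:

```shell
# Write the user-data script to a file
cat > /tmp/userdata.sh <<'EOF'
#!/bin/bash
echo $RANDOM > /home/ubuntu/clusterID
EOF

# The CLI performs this base64 encoding for you behind the scenes
ENCODED=$(base64 -w0 < /tmp/userdata.sh)

# Decoding gives back the original script, with plain Unix newlines
echo "$ENCODED" | base64 -d | head -1   # #!/bin/bash
```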