I am trying to create an EC2 spot instance using docker-machine. I debugged my command and found that it is failing at the SSH step.
I used ssh-keygen to create my key pair, and I have uploaded the key pair to AWS.
I am able to connect to the instance using PuTTY. Please help me solve this error!
Also, I am running the docker-machine command from an EC2 instance.
docker-machine version:
docker-machine version 0.13.0, build 9ba6da9
Command:
docker-machine --debug create --driver amazonec2 --amazonec2-access-key xxxxxxxxx --amazonec2-secret-key xxxxxxxxx --amazonec2-ssh-user ubuntu --amazonec2-region us-east-1 --amazonec2-instance-type t2.large --amazonec2-ami ami-xxxxx --amazonec2-vpc-id vpc-xxxxx --amazonec2-subnet-id subnet-xxxx --amazonec2-zone a --amazonec2-root-size 32 --amazonec2-keypair-name id_rsa --amazonec2-ssh-keypath $HOME/.ssh/id_rsa --amazonec2-request-spot-instance --amazonec2-security-group dev --amazonec2-private-address-only --amazonec2-spot-price x.xx dev4
Error:
Using SSH client type: external
Using SSH private key: /home/centos/.docker/machine/machines/dev4/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none ubuntu@xx.xxx.x.xx -o IdentitiesOnly=yes -i /home/centos/.docker/machine/machines/dev4/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : ssh command error:
command : exit 0
err : exit status 255
output :
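For what it's worth, the failing check can be reproduced by hand with essentially the same invocation the debug log shows above, but with verbose output instead of LogLevel=quiet (the IP is masked above, so substitute the instance's private address):
ssh -v -F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /home/centos/.docker/machine/machines/dev4/id_rsa -p 22 ubuntu@xx.xxx.x.xx exit
The -v output should say whether the server rejects the key, the user, or the connection itself.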
docker-machine inspect dev4
{
"ConfigVersion": 3,
"Driver": {
"IPAddress": "xx.xxx.xx.xx",
"MachineName": "dev4",
"SSHUser": "ubuntu",
"SSHPort": 22,
"SSHKeyPath": "/home/centos/.docker/machine/machines/dev4/id_rsa",
"StorePath": "/home/centos/.docker/machine",
"SwarmMaster": false,
"SwarmHost": "tcp://0.0.0.0:3376",
"SwarmDiscovery": "",
"Id": "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
"AccessKey": "xxxxxxxxxx",
"SecretKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SessionToken": "",
"Region": "us-east-1",
"AMI": "ami-xxxx",
"SSHKeyID": 0,
"ExistingKey": true,
"KeyName": "id_rsa",
"InstanceId": "i-xxxxxxxxxxxxxxxxxxxx",
"InstanceType": "t2.large",
"PrivateIPAddress": "xx.xxx.xx.xx",
"SecurityGroupId": "",
"SecurityGroupIds": [
"sg-xxxxxx"
],
"SecurityGroupName": "",
"SecurityGroupNames": [
"dev"
],
"OpenPorts": null,
"Tags": "",
"ReservationId": "",
"DeviceName": "/dev/sda1",
"RootSize": 32,
"VolumeType": "gp2",
"IamInstanceProfile": "",
"VpcId": "vpc-xxxxxx",
"SubnetId": "subnet-xxxxx",
"Zone": "a",
"RequestSpotInstance": true,
"SpotPrice": "x.xx",
"BlockDurationMinutes": 0,
"PrivateIPOnly": true,
"UsePrivateIP": false,
"UseEbsOptimizedInstance": false,
"Monitoring": false,
"SSHPrivateKeyPath": "/home/centos/.ssh/id_rsa",
"RetryCount": 5,
"Endpoint": "",
"DisableSSL": false,
"UserDataFile": ""
},
"DriverName": "amazonec2",
"HostOptions": {
"Driver": "",
"Memory": 0,
"Disk": 0,
"EngineOptions": {
"ArbitraryFlags": [],
"Dns": null,
"GraphDir": "",
"Env": [],
"Ipv6": false,
"InsecureRegistry": [],
"Labels": [],
"LogLevel": "",
"StorageDriver": "",
"SelinuxEnabled": false,
"TlsVerify": true,
"RegistryMirror": [],
"InstallURL": "https://get.docker.com"
},
"SwarmOptions": {
"IsSwarm": false,
"Address": "",
"Discovery": "",
"Agent": false,
"Master": false,
"Host": "tcp://0.0.0.0:3376",
"Image": "swarm:latest",
"Strategy": "spread",
"Heartbeat": 0,
"Overcommit": 0,
"ArbitraryFlags": [],
"ArbitraryJoinFlags": [],
"Env": null,
"IsExperimental": false
},
"AuthOptions": {
"CertDir": "/home/centos/.docker/machine/certs",
"CaCertPath": "/home/centos/.docker/machine/certs/ca.pem",
"CaPrivateKeyPath": "/home/centos/.docker/machine/certs/ca-key.pem",
"CaCertRemotePath": "",
"ServerCertPath": "/home/centos/.docker/machine/machines/dev4/server.pem",
"ServerKeyPath": "/home/centos/.docker/machine/machines/dev4/server-key.pem",
"ClientKeyPath": "/home/centos/.docker/machine/certs/key.pem",
"ServerCertRemotePath": "",
"ServerKeyRemotePath": "",
"ClientCertPath": "/home/centos/.docker/machine/certs/cert.pem",
"ServerCertSANs": [],
"StorePath": "/home/centos/.docker/machine/machines/dev4"
}
},
"Name": "dev4"
}
According to the AWS CLI docs, the aws rds stop-db-cluster command returns output containing the attribute "AutomaticRestartTime". But when I run the command, the returned output does not contain that attribute.
Command executed:
aws rds stop-db-cluster --db-cluster-identifier xxxxxxxxxxxxxxx --output json
Returned output:
{
"DBCluster": {
"AllocatedStorage": 1,
"AvailabilityZones": [
"us-east-1c",
"us-east-1b",
"us-east-1a"
],
"BackupRetentionPeriod": 7,
"DBClusterIdentifier": "xxxxxxxxxxxxxxx",
"DBClusterParameterGroup": "jjjjjjjjjjjjj",
"DBSubnetGroup": "xxxxxx-subnets-4839849389098",
"Status": "available",
"EarliestRestorableTime": "2022-08-04T05:02:13.522000+00:00",
"Endpoint": "xxxxxxxxxxxxx.cluster-cjdlcwljcnljwd.us-east-1.rds.amazonaws.com",
"ReaderEndpoint": "xxxxxxxxxx.cluster-ro-hjdhjhjhjhj.us-east-1.rds.amazonaws.com",
"MultiAZ": false,
"Engine": "aurora-mysql",
"EngineVersion": "5.7.mysql_aurora.2.10.2",
"LatestRestorableTime": "2022-08-11T06:27:19.824000+00:00",
"Port": 3306,
"MasterUsername": "yyyyyyyy",
"PreferredBackupWindow": "05:00-06:30",
"PreferredMaintenanceWindow": "sun:07:00-sun:09:30",
"ReadReplicaIdentifiers": [],
"DBClusterMembers": [
{
"DBInstanceIdentifier": "xxxxxxxxx",
"IsClusterWriter": true,
"DBClusterParameterGroupStatus": "in-sync",
"PromotionTier": 0
}
],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-0aj909bc",
"Status": "active"
}
],
"HostedZoneId": "JSKDJLKDKLDLK",
"StorageEncrypted": true,
"KmsKeyId": "arn:aws:kms:us-east-1:000000000000:key/hcdkjchjdhckjwhckj",
"DbClusterResourceId": "cluster-gggggggggg",
"DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:xxxxxxxxx",
"AssociatedRoles": [],
"IAMDatabaseAuthenticationEnabled": true,
"ClusterCreateTime": "2019-02-19T17:29:52.223000+00:00",
"EngineMode": "provisioned",
"DeletionProtection": false,
"HttpEndpointEnabled": false,
"CopyTagsToSnapshot": false,
"CrossAccountClone": false,
"DomainMemberships": []
}
}
What am I doing wrong here?
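For reference, once your CLI's API version includes the attribute, it can be queried directly with a JMESPath filter (the cluster identifier is a placeholder):
aws rds describe-db-clusters --db-cluster-identifier xxxxxxxxxxxxxxx --query "DBClusters[0].AutomaticRestartTime"
Note that the RDS API generally omits attributes that have no value, so a missing key may just mean the value is unset rather than unsupported.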
I have a t2.2xlarge AWS EC2 instance whose type I need to change to t3.2xlarge.
But when I try to start it, I get:
"Error starting instances The requested configuration is currently not
supported. Please check the documentation for supported
configurations."
When I run the check script, everything is fine:
https://github.com/awslabs/aws-support-tools/tree/master/EC2/NitroInstanceChecks
OK NVMe Module is installed and available on your instance
OK ENA Module with version is installed and available on your instance
OK fstab file looks fine and does not contain any device names.
And I also did all the checks described here:
https://aws.amazon.com/premiumsupport/knowledge-center/boot-error-linux-nitro-instance/
aws ec2 describe-instances --instance-ids my-instance-id --query "Reservations[].Instances[].EnaSupport"
[
true
]
Is there anything else I should change to be able to start it as t3.2xlarge?
To reproduce:
Create a t2.2xlarge instance with default settings
Stop it and change type to t3.2xlarge
Try to start it
More detailed info about the instance:
aws ec2 describe-instances
{
"Reservations": [
{
"Groups": [],
"Instances": [
{
"AmiLaunchIndex": 0,
"ImageId": "ami-***********",
"InstanceId": "i-***********",
"InstanceType": "t2.2xlarge",
"KeyName": "***********",
"LaunchTime": "2020-11-24T06:11:41+00:00",
"Monitoring": {
"State": "disabled"
},
"Placement": {
"AvailabilityZone": "us-east-1e",
"GroupName": "",
"Tenancy": "default"
},
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********",
"ProductCodes": [],
"PublicDnsName": "ec2-***********.compute-1.amazonaws.com",
"PublicIpAddress": "***********",
"State": {
"Code": 16,
"Name": "running"
},
"StateTransitionReason": "",
"SubnetId": "subnet-***********",
"VpcId": "vpc-***********",
"Architecture": "x86_64",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"AttachTime": "2020-10-06T05:07:35+00:00",
"DeleteOnTermination": true,
"Status": "attached",
"VolumeId": "vol-***********"
}
}
],
"ClientToken": "",
"EbsOptimized": false,
"EnaSupport": true,
"Hypervisor": "xen",
"NetworkInterfaces": [
{
"Association": {
"IpOwnerId": "amazon",
"PublicDnsName": "***********.compute-1.amazonaws.com",
"PublicIp": "***********"
},
"Attachment": {
"AttachTime": "2020-10-06T05:07:34+00:00",
"AttachmentId": "eni-attach-***********",
"DeleteOnTermination": true,
"DeviceIndex": 0,
"Status": "attached",
"NetworkCardIndex": 0
},
"Description": "",
"Groups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-***********"
}
],
"Ipv6Addresses": [],
"MacAddress": "***********",
"NetworkInterfaceId": "eni-***********",
"OwnerId": "***********",
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********",
"PrivateIpAddresses": [
{
"Association": {
"IpOwnerId": "amazon",
"PublicDnsName": "ec2-***********.compute-1.amazonaws.com",
"PublicIp": "***********"
},
"Primary": true,
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********"
}
],
"SourceDestCheck": true,
"Status": "in-use",
"SubnetId": "subnet-***********",
"VpcId": "vpc-***********",
"InterfaceType": "interface"
}
],
"RootDeviceName": "/dev/sda1",
"RootDeviceType": "ebs",
"SecurityGroups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-***********"
}
],
"SourceDestCheck": true,
"Tags": [
{
"Key": "Name",
"Value": ""
}
],
"VirtualizationType": "hvm",
"CpuOptions": {
"CoreCount": 8,
"ThreadsPerCore": 1
},
"CapacityReservationSpecification": {
"CapacityReservationPreference": "open"
},
"HibernationOptions": {
"Configured": false
},
"MetadataOptions": {
"State": "applied",
"HttpTokens": "optional",
"HttpPutResponseHopLimit": 1,
"HttpEndpoint": "enabled"
},
"EnclaveOptions": {
"Enabled": false
}
}
],
"OwnerId": "***********",
"ReservationId": "r-***********"
}
]
}
I tried to launch a t3.2xlarge in us-east-1e and got the following error:
Your requested instance type (t3.2xlarge) is not supported in your requested Availability Zone (us-east-1e). Please retry your request by not specifying an Availability Zone or choosing us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f.
AWS probably doesn't have t3.2xlarge instances available in this AZ.
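A quick way to list which Availability Zones actually offer a given instance type (requires a reasonably recent AWS CLI):
aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=instance-type,Values=t3.2xlarge --region us-east-1 --query "InstanceTypeOfferings[].Location"
If us-east-1e is missing from the returned list, that confirms the explanation above.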
My Packer script gives an error:
{
"variables":
{
"aws_access_key": "",
"aws_secret_key": "",
"revision": "0",
"ansible_host":""
},
"builders":[{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-2",
"instance_type": "t2.micro",
"source_ami": "ami-09e1c6dd3bd60cf2e",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*",
"root-device-type": "ebs"
}},
"ssh_username": "ubuntu",
"ami_name":"honebackend {{ isotime | clean_ami_name }}"
}],
"provisioners":[
{
"type":"shell",
"script":"scripts/ssh_agent.sh"
},
{
"type": "shell",
"execute_command": "mkdir /var/apps"
},
{
"type":"ansible",
"extra_arguments": [ "-vvv --extra-vars 'ansible_host={{user `host`}} ../ansible/hosts.ini ansible_python_interpreter=/usr/bin/python3"],
"inventory_file": "../ansible/hosts.ini",
"playbook_file":"../ansible/nodejs.yml"
}
]
}
after running the following command:
packer build -debug -var 'aws_access_key=XXXXXXXXXXXXXXXXXXXXXXX' -var
'aws_secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' packer.json
The actual result is:
Debug mode enabled. Builds will not be parallelized.
amazon-ebs output will be in this color.
1 error(s) occurred:
* Either a script file or inline script must be specified.
What have I done wrong here?
As the error says:
{
"type": "shell",
"execute_command": "mkdir /var/apps"
},
Should really be:
{
"type": "shell",
"inline": ["mkdir /var/apps"]
},
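Two side notes, in case they bite next: inline is a list of strings, one command per entry, and creating a directory under /var as the default ubuntu user will most likely need sudo, e.g. "inline": ["sudo mkdir -p /var/apps"]. execute_command, by contrast, is a template controlling how Packer invokes a script, not a standalone command to run.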
The ec2_remote_facts module works correctly when I do not run it on Ansible Tower. The first example below (not using Tower) includes all of the block_device_mapping information that I use in subsequent tasks.
This is a big issue if I were to use Tower in the long run. My code is the same for both examples. Any thoughts that could lead me in the right direction?
My only thought is that since it is not a core module, Ansible Tower may not be perfectly synced to the module's most recent code. But I am baffled. Thanks!
Ansible Version - ansible 2.2.0.0 (running on Ubuntu)
Ansible Tower Version - Tower Version 3.0.3 (running on Centos)
---examples below----
-Ansible (not using Tower)-
ok: [localhost -> localhost] => {
"changed": false,
"instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": [
{
"attach_time": "2017-01-13T17:05:31.000Z",
"delete_on_termination": false,
"device_name": "/dev/sdb",
"status": "attached",
"volume_id": "vol-132312313212313"
},
{
"attach_time": "2017-01-13T17:05:31.000Z",
"delete_on_termination": true,
"device_name": "/dev/sda1",
"status": "attached",
"volume_id": "vol-123123123123"
},
{
"attach_time": "2017-01-13T17:05:31.000Z",
"delete_on_termination": false,
"device_name": "/dev/sdc",
"status": "attached",
"volume_id": "vol-123123123123"
}
],
"client_token": "",
"ebs_optimized": false,
"groups": [
{
"id": "sg-12312313",
"name": "n123123123
}
],
"hypervisor": "xen",
"id": "i-123123123123",
"image_id": "ami-123123123123",
"instance_profile": null,
"interfaces": [
{
"id": "eni-123123123",
"mac_address": "123123123"
}
],
"kernel": null,
"key_name": "my-v123123",
"launch_time": "2017-01-13T17:05:30.000Z",
"monitoring_state": "disabled",
"persistent": false,
"placement": {
"tenancy": "default",
"zone": "us-east-1b"
},
"private_dns_name": "ip-112312312",
"private_ip_address": "10.1.1.4",
"public_dns_name": "",
"public_ip_address": null,
"ramdisk": null,
"region": "us-east-1",
"requester_id": null,
"root_device_type": "ebs",
"source_destination_check": "true",
"spot_instance_request_id": null,
"state": "running",
"tags": {
"CurrentIP": "10.1.1.1.4",
"Name": "d1",
"Type": "d2"
},
"virtualization_type": "hvm",
"vpc_id": "vpc-123123123"
},
Ansible Tower (notice that it's missing the block_device_mapping block of code)
TASK [debug] **********************
ok: [localhost] => {
"db_id.instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"client_token": "",
"ebs_optimized": false,
"groups": [
{
"id": "sg-123123",
"name": "n123123123"
}
],
"hypervisor": "xen",
"id": "i-123123123",
"image_id": "ami-123123",
"instance_profile": null,
"interfaces": [
{
"id": "eni-123123123",
"mac_address": "123123123"
}
],
"kernel": null,
"key_name": "m123123",
"launch_time": "2017-01-13T17:05:30.000Z",
"monitoring_state": "disabled",
"persistent": false,
"placement": {
"tenancy": "default",
"zone": "us-east-1b"
},
"private_dns_name": "ip-1123123123123",
"private_ip_address": "10.1.1.4",
"public_dns_name": "",
"ramdisk": null,
"region": "us-east-1",
"requester_id": null,
"root_device_type": "ebs",
"source_destination_check": "true",
"spot_instance_request_id": null,
"state": "running",
"tags": {
"Name": "123123",
"Type": "123123"
},
"virtualization_type": "hvm",
"vpc_id": "vpc-123123123"
},
I guess you indeed have an old Ansible version on your Tower box.
As of today, the official Ansible Tower Vagrant box (ansible/tower (virtualbox, 3.0.3)) has version 2.1.2 inside:
[vagrant@ansible-tower ~]$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
And ec2_remote_facts has no block_device_mapping in this version.
So update Ansible on your Tower box or apply this patch.
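If you want to confirm before upgrading, you can check whether the copy of the module Tower actually uses mentions the field at all (a sketch to run on the Tower box; the filesystem-wide search may take a moment):
find / -name ec2_remote_facts.py -exec grep -l block_device_mapping {} + 2>/dev/null
If nothing comes back, the installed module predates block_device_mapping support.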
I'm trying to create a read replica using the following command:
aws rds create-db-instance-read-replica --db-instance-identifier dbname-read --source-db-instance-identifier dbname --availability-zone us-east-1c
I'm getting the following error:
A client error (InvalidDBInstanceState) occurred when calling the CreateDBInstanceReadReplica operation: Automated backups are not enabled for this database instance. To enable automated backups, use ModifyDBInstance to set the backup retention period to a non-zero value.
I checked and the cluster is configured with automatic backups:
{
"DBInstances": [
{
"PubliclyAccessible": false,
"MasterUsername": "root",
"LicenseModel": "general-public-license",
"VpcSecurityGroups": [
{
"Status": "active",
"VpcSecurityGroupId": "sg"
}
],
"InstanceCreateTime": "2015-12-20T02:38:26.179Z",
"CopyTagsToSnapshot": false,
"OptionGroupMemberships": [
{
"Status": "in-sync",
"OptionGroupName": "default:aurora-5-6"
}
],
"PendingModifiedValues": {},
"Engine": "aurora",
"MultiAZ": false,
"DBSecurityGroups": [],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.aurora5.6",
"ParameterApplyStatus": "in-sync"
}
],
"AutoMinorVersionUpgrade": true,
"PreferredBackupWindow": "03:44-04:14",
"DBSubnetGroup": {
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet",
"SubnetAvailabilityZone": {
"Name": "us-east-1a"
}
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet",
"SubnetAvailabilityZone": {
"Name": "us-east-1c"
}
}
],
"DBSubnetGroupName": "dev-subnet-group",
"VpcId": "vpc",
"DBSubnetGroupDescription": "dev-subnet-group",
"SubnetGroupStatus": "Complete"
},
"ReadReplicaDBInstanceIdentifiers": [],
"AllocatedStorage": 1,
*"BackupRetentionPeriod": 7,*
"PreferredMaintenanceWindow": "mon:10:11-mon:10:41",
"Endpoint": {
"Port": 3306,
"Address": "dbname.us-east-1.rds.amazonaws.com"
},
"DBInstanceStatus": "available",
"EngineVersion": "5.6.10a",
"AvailabilityZone": "us-east-1a",
"DBClusterIdentifier": "dbname",
"StorageType": "aurora",
"DbiResourceId": "db-**********",
"CACertificateIdentifier": "rds-ca-2015",
"StorageEncrypted": false,
"DBInstanceClass": "db.r3.large",
"DbInstancePort": 0,
"DBInstanceIdentifier": "dbname"
}
]
}
Any idea?
Thanks,
Roey
The Aurora engine doesn't support
create-db-instance-read-replica
Instead, just create another instance using
create-db-instance
with the option --db-cluster-identifier.
The newly created instance will automatically sync with the writer/master and will automatically act as a read-only replica.
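A sketch for the cluster in the question (identifiers and instance class are copied from the output above and are placeholders):
aws rds create-db-instance --db-instance-identifier dbname-read --db-cluster-identifier dbname --engine aurora --db-instance-class db.r3.large --availability-zone us-east-1c
Aurora Replicas share the cluster's storage volume, which is why the instance-level CreateDBInstanceReadReplica call doesn't apply here.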