Vagrant AWS error when specifying a VPC - amazon-web-services

I'm using Vagrant and Chef Solo to set up a server in AWS. When launching a classic instance it works fine; however, when I attempt to specify a VPC (through the subnet ID) and an EC2 instance type that is only available within a VPC, the vagrant up call results in the error below. Does anyone know how to get Vagrant to work with VPCs?
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-96401ce1
==> default: -- Region: eu-west-1
==> default: -- Keypair: vagrant-key-pair-eu-west-1
==> default: -- IAM Instance Profile Name: DEV-config-ipython
==> default: -- Security Groups: ["ssh-only-from-anywhere", "http-from-me"]
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Assigning a public IP address in a VPC: false
There was an error talking to AWS. The error message is shown
below:
VPCResourceNotSpecified => The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
My Vagrantfile is as follows:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu_aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
  #config.vm.synced_folder "../.", "/vagrant", id: "vagrant-root"
  config.omnibus.chef_version = :latest

  config.vm.provider :aws do |aws, override|
    aws.region = "eu-west-1"
    aws.security_groups = [ 'ssh-only-from-me', 'http-from-me' ]
    aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.keypair_name = ENV['AWS_KEYPAIR_NAME']
    #aws.instance_type = "m3.medium"
    aws.instance_type = "t2.micro"
    aws.ami = "ami-96401ce1" ## Ubuntu 14 LTS on HVM
    subnet_id = "subnet-ed9cd588" # vagrantVPC publicSubnet
    associate_public_ip = true
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['MY_PRIVATE_AWS_SSH_KEY_PATH']
    aws.iam_instance_profile_name = 'DEV-config-ipython'
    aws.tags = {
      'Name' => 'ipython',
      'env' => 'DEV',
      'application' => 'test',
    }
  end

  config.vm.provision :chef_solo do |chef|
    config.berkshelf.enabled = true
    chef.data_bags_path = "./data_bags"
    chef.custom_config_path = "Vagrantfile.chef"
    chef.json = {
      'java' => {
        "install_flavor" => "oracle",
        "jdk_version" => "8",
        "oracle" => {
          "accept_oracle_download_terms" => true
        }
      }
    }
    chef.add_recipe "apt"
    chef.add_recipe "build-essential"
    #chef.add_recipe "serverTest"
    chef.add_recipe "java"
    chef.add_recipe "gson"
    chef.add_recipe "log4j"
    chef.add_recipe "maven"
    chef.add_recipe "aws-sdk-cookbook"
    chef.add_recipe "cron"
    chef.add_recipe "awscli"
    chef.add_recipe "maven"
  end
end

Should it maybe be:
aws.subnet_id = "subnet-ed9cd588" # vagrantVPC publicSubnet
aws.associate_public_ip = true
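That looks right: in Ruby, a bare subnet_id = "..." only creates a local variable inside the block, so the provider never sees it; the vagrant-aws options are attributes on the aws object. A minimal sketch of the relevant lines, with the same values as above:

config.vm.provider :aws do |aws, override|
  # ... existing settings unchanged ...
  aws.subnet_id           = "subnet-ed9cd588"  # vagrantVPC publicSubnet
  aws.associate_public_ip = true               # only honoured when launching into a VPC subnet
end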

Related

Terraform aws_eks_node_group creation error with launch_template "Unsupported - The requested configuration is currently not supported"

Objective of my effort: create an EKS node group with a custom AMI (Ubuntu).
Issue Statement: On creating aws_eks_node_group along with launch_template, I am getting an error:
Error: error waiting for EKS Node Group (qa-svr-centinela-eks-cluster01:qa-svr-centinela-nodegroup01) creation: AsgInstanceLaunchFailures: Could not launch On-Demand Instances. Unsupported - The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.. Resource IDs: [eks-82bb24f0-2d7e-ba9d-a80a-bb9653cde0c6]
Research so far: as per AWS, custom AMIs can now be used for EKS.
The custom Ubuntu image I am using is built with Packer, and I was encrypting the boot volume with an external AWS KMS key. At first I thought the AMI encryption might be causing the problem, so I removed the encryption from the Packer code.
But that didn't resolve the issue. Maybe I am not thinking in the right direction?
Any help is much appreciated. Thanks.
The Terraform code used is below: I am attempting to create an EKS node group with a launch template, but keep running into this error.
Packer code:
source "amazon-ebs" "ubuntu18" {
ami_name = "pxx3"
ami_virtualization_type = "hvm"
tags = {
"cc" = "sxx1"
"Name" = "packerxx3"
}
region = "us-west-2"
instance_type = "t3.small"
# AWS Ubuntu AMI
source_ami = "ami-0ac73f33a1888c64a"
associate_public_ip_address = true
ebs_optimized = true
# public subnet
subnet_id = "subnet-xx"
vpc_id = "vpc-xx"
communicator = "ssh"
ssh_username = "ubuntu"
}
build {
sources = [
"source.amazon-ebs.ubuntu18"
]
provisioner "ansible" {
playbook_file = "./ubuntu.yml"
}
}
ubuntu.yml - only used for installing a few libraries
---
- hosts: default
  gather_facts: no
  become: yes
  tasks:
    - name: create the license key for new relic agent
      shell: |
        curl -s https://download.newrelic.com/infrastructure_agent/gpg/newrelic-infra.gpg | apt-key add - && \
        printf "deb [arch=amd64] https://download.newrelic.com/infrastructure_agent/linux/apt bionic main" | tee -a /etc/apt/sources.list.d/newrelic-infra.list
    - name: check sources.list
      shell: |
        cat /etc/apt/sources.list.d/newrelic-infra.list
    - name: apt-get update
      apt: update_cache=yes force_apt_get=yes
    - name: install new relic agent
      package:
        name: newrelic-infra
        state: present
    - name: update apt-get repo and cache
      apt: update_cache=yes force_apt_get=yes
    - name: apt-get upgrade
      apt: upgrade=dist force_apt_get=yes
    - name: install essential softwares
      package:
        name: "{{ item }}"
        state: latest
      loop:
        - software-properties-common
        - vim
        - nano
        - glibc-source
        - groff
        - less
        - traceroute
        - whois
        - telnet
        - dnsutils
        - git
        - mlocate
        - htop
        - zip
        - unzip
        - curl
        - ruby-full
        - wget
      ignore_errors: yes
    - name: Add the ansible PPA to your system’s sources list
      apt_repository:
        repo: ppa:ansible/ansible
        state: present
        mode: 0666
    - name: Add the deadsnakes PPA to your system’s sources list
      apt_repository:
        repo: ppa:deadsnakes/ppa
        state: present
        mode: 0666
    - name: install softwares
      package:
        name: "{{ item }}"
        state: present
      loop:
        - ansible
        - python3.8
        - python3-winrm
      ignore_errors: yes
    - name: install AWS CLI
      shell: |
        curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
        unzip awscliv2.zip
        ./aws/install
aws_eks_node_group configuration.
resource "aws_eks_node_group" "nodegrp" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "xyz-nodegroup01"
node_role_arn = aws_iam_role.eksnode.arn
subnet_ids = [data.aws_subnet.tf_subnet_private01.id, data.aws_subnet.tf_subnet_private02.id]
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
depends_on = [
aws_iam_role_policy_attachment.nodepolicy01,
aws_iam_role_policy_attachment.nodepolicy02,
aws_iam_role_policy_attachment.nodepolicy03
]
launch_template {
id = aws_launch_template.eks.id
version = aws_launch_template.eks.latest_version
}
}
aws_launch_template configuration.
resource "aws_launch_template" "eks" {
name = "${var.env}-launch-template"
update_default_version = true
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 50
}
}
credit_specification {
cpu_credits = "standard"
}
ebs_optimized = true
# AMI generated with packer (is private)
image_id = "ami-0ac71233a184566453"
instance_type = t3.micro
key_name = "xyz"
network_interfaces {
associate_public_ip_address = false
}
}

Why can't Terraform SSH into my EC2 instance?

I am trying to SSH into a newly created EC2 instance with Terraform. My host is Windows 10, and I have no problem SSHing into the instance using Bitvise SSH Client from my host, but Terraform can't seem to SSH in to create a directory on the instance.
My main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_security_group" "instance" {
name = "inlets-server-instance"
description = "Security group for the inlets server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "tunnel" {
ami = "ami-07b4f3c02c7f83d59"
instance_type = "t2.nano"
key_name = "${var.key_name}"
vpc_security_group_ids = [aws_security_group.instance.id]
tags = {
Name = "inlets-server"
}
provisioner "local-exec" {
command = "echo ${aws_instance.tunnel.public_ip} > ${var.public_ip_path}"
}
provisioner "remote-exec" {
inline = [
"mkdir /home/${var.ssh_user}/ansible",
]
connection {
type = "ssh"
host = "${file("${var.public_ip_path}")}"
user = "${var.ssh_user}"
private_key = "${file("${var.private_key_path}")}"
timeout = "1m"
agent = false
}
}
}
My variables.tf:
variable "key_name" {
description = "Name of the SSH key pair generated in Amazon EC2."
default = "aws_ssh_key"
}
variable "public_ip_path" {
description = "Path to the file that contains the instance's public IP address"
default = "ip_address.txt"
}
variable "private_key_path" {
description = "Path to the private SSH key, used to access the instance."
default = "aws_ssh_key.pem"
}
variable "ssh_user" {
description = "SSH user name to connect to your instance."
default = "ubuntu"
}
All I get are attempted connections:
aws_instance.tunnel (remote-exec): Connecting to remote host via SSH...
aws_instance.tunnel (remote-exec): Host: XX.XXX.XXX.XXX
aws_instance.tunnel (remote-exec): User: ubuntu
aws_instance.tunnel (remote-exec): Password: false
aws_instance.tunnel (remote-exec): Private key: true
aws_instance.tunnel (remote-exec): Certificate: false
aws_instance.tunnel (remote-exec): SSH Agent: false
aws_instance.tunnel (remote-exec): Checking Host Key: false
and it finally times out with:
Error: timeout - last error: dial tcp: lookup XX.XXX.XXX.XXX
: no such host
Any ideas?
You didn't describe your network structure.
Is your Windows 10 machine inside the VPC? If not, do you have an internet gateway, routing table, and NAT gateway properly set up?
It would be cleaner and safer to reference the instance's IP address directly in Terraform, for example via an Elastic IP resource, instead of trying to read it back from a file written on the machine. The local-exec will certainly run before the remote-exec finishes, but going through the file creates an implicit dependency that can generate problems.
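As an illustration of that suggestion, here is a minimal sketch in Terraform 0.12+ syntax (adapt names to your configuration). Referencing the address inside Terraform also avoids the trailing newline that file() picks up from ip_address.txt, which is likely why the lookup fails with "no such host":

# Option 1: use the instance's own attribute inside its provisioner
provisioner "remote-exec" {
  inline = [
    "mkdir /home/${var.ssh_user}/ansible",
  ]

  connection {
    type        = "ssh"
    host        = self.public_ip              # no file round-trip, no stray newline
    user        = var.ssh_user
    private_key = file(var.private_key_path)
    timeout     = "1m"
    agent       = false
  }
}

# Option 2: a dedicated Elastic IP whose address other resources and outputs can reference
resource "aws_eip" "tunnel" {
  instance = aws_instance.tunnel.id
  vpc      = true
}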

Cannot provision aws_spot_instance via terraform

I am attempting to spin up a spot instance via Terraform. When I try to use a provisioner block (either "remote-exec" or "file"), it fails and I see an SSH error in DEBUG-level output. When I switch from a spot instance request to a standard aws_instance resource declaration, the provisioning works fine.
Code not working:
resource "aws_spot_instance_request" "worker01" {
ami = "ami-0cb95574"
spot_price = "0.02"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
Error:
aws_spot_instance_request.worker01 (remote-exec): Connecting to remote host via SSH...
aws_spot_instance_request.worker01 (remote-exec): Host:
aws_spot_instance_request.worker01 (remote-exec): User: ec2-user
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshaking with SSH
aws_spot_instance_request.worker01 (remote-exec): Password: false
aws_spot_instance_request.worker01 (remote-exec): Private key: true
aws_spot_instance_request.worker01 (remote-exec): SSH Agent: true
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 Retryable error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Working code:
resource "aws_instance" "worker01" {
ami = "ami-0cb95574"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
I have tried a few different iterations of the non-working code (including a silly attempt to hard-code a public IP for the spot instance, and an attempted self-reference to the spot instance's public IP, which gave a "no such attribute" error). Unfortunately, I could not find anyone with similar issues via Google. From what I have read, I should be able to provision a spot instance in this manner.
Thanks for any help you can provide.
You need to add wait_for_fulfillment = true to your spot instance request or the resource will return before the instance is created.
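A minimal sketch of that change, with everything else kept as in the non-working example above:

resource "aws_spot_instance_request" "worker01" {
  ami                  = "ami-0cb95574"
  spot_price           = "0.02"
  instance_type        = "m3.medium"
  wait_for_fulfillment = true   # block until the spot request is fulfilled, so the provisioner has an instance to connect to
  # ... security groups, subnet, connection and provisioner blocks as before ...
}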

vagrant-aws: AWS was not able to validate the provided access credentials

I am on Windows 10, using vagrant-aws (https://github.com/mitchellh/vagrant-aws) to vagrant up an Amazon instance, and getting the following error. I have listed my Vagrantfile as well.
Also, some people reported this might be caused by the system time; I have synced the system time on Windows 10, but still no luck!
$ vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
C:/Users/jacky/.vagrant.d/gems/gems/vagrant-aws-0.7.0/lib/vagrant-aws/action/run_instance.rb:98: warning: duplicated key at line 100 ignored: :associate_public_ip
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: m3.medium
==> default: -- AMI: ami-42116522
==> default: -- Region: us-west-1
==> default: -- Keypair: 2016_05_14_keypair
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
There was an error talking to AWS. The error message is shown
below:
AuthFailure => AWS was not able to validate the provided access credentials
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "..."
    aws.secret_access_key = "..."
    aws.session_token = "..."
    aws.keypair_name = "2016_05_14_keypair"
    aws.ami = "ami-42116522"
    aws.region = "us-west-1"
    #aws.instance_type = "t2.small"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "C:/2016_05_14_keypair.pem"
  end
end
I know this may be a bit late for you. I had the same issue, with my Vagrantfile identical to yours, and I resolved it by removing the "aws.session_token = " line.
Mine was a simpler solution: I had capitalized "US" in the region name - it is case sensitive - doh!
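A minimal sketch combining both fixes (assuming you are using a plain long-lived IAM access key rather than temporary STS credentials):

config.vm.provider :aws do |aws, override|
  aws.access_key_id = "..."
  aws.secret_access_key = "..."
  # aws.session_token is only needed for temporary (STS) credentials;
  # removing it with a long-lived access key resolved the AuthFailure above.
  aws.keypair_name = "2016_05_14_keypair"
  aws.ami = "ami-42116522"
  aws.region = "us-west-1"   # region names are lowercase and case sensitive
  override.ssh.username = "ubuntu"
  override.ssh.private_key_path = "C:/2016_05_14_keypair.pem"
end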

Errors occurred when using Chef-solo + Vagrant + AWS + Mac OS X

I intended to use Vagrant and Chef-solo to establish an AWS environment, but I got some errors that I cannot solve. Can anybody help me?
The steps I used:
Install all the necessary tooling on Mac OS X: Vagrant, the Vagrant AWS plugin, VirtualBox, Chef, the Chef plugin, and so on.
Download vagrant configuration files:
git clone https://github.com/ICTatRTI/ict-chef-repo
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
  #config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_chef-11.2.0.box"
  #config.vm.box = "opscode-ubuntu-1204"
  config.vm.box = "dummy"
  config.vm.network :forwarded_port, guest: 80, host: 8888
  config.vm.network :forwarded_port, guest: 3306, host: 3333
  config.ssh.username = "ubuntu"

  config.vm.provider :aws do |aws, override|
  #config.vm.provider :aws do |aws|
    aws.access_key_id = 'XXXXXXXXXXXXXXXQ'
    aws.secret_access_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    aws.keypair_name = "usr-aws-2013"
    aws.availability_zone = "us-west-2c"
    aws.instance_type = "t1.micro"
    aws.region = "us-west-2"
    aws.ami = "ami-0849a03f"
    aws.security_groups = ['quicklaunch-1']
    aws.tags = {
      'Name' => 'tong',
      'Description' => 'vagrant test'
    }
    override.ssh.private_key_path = "~/.ssh/usr-aws-2013.pem"
    override.ssh.username = "ubuntu"
  end

  config.vm.provision :chef_solo do |chef|
    chef.node_name = 'base'
    chef.cookbooks_path = "./cookbooks"
    chef.roles_path = "./roles"
    chef.add_role "base"
    chef.add_role "ushahidi"
  end
end
Run:
vagrant up --provider=aws
Got the following errors:
Bringing machine 'default' up with 'aws' provider...
WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: t1.micro
[default] -- AMI: ami-0849a03f
[default] -- Region: us-west-2
[default] -- Availability Zone: us-west-2c
[default] -- Keypair: usr-aws-2013
[default] -- Security Groups: ["quicklaunch-1"]
[default] -- Block Device Mapping: []
[default] -- Terminate On Shutdown: false
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An unexpected error ocurred when executing the action on the
'default' machine. Please report this as a bug:
The image id '[ami-0849a03f]' does not exist
An instance and an AMI are different things, and they have different IDs too. So if you have instance i-bddcf889, you cannot reference it in your Vagrantfile as ami-bddcf889.
You don't need to create or start the instance manually; instead, you must provide an AMI from which Vagrant will create the instance itself, for example the AMI you launched your manual instance from. See the sketch below.
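In Vagrantfile terms (the ID below is just a placeholder), the setting must name an image, not a running instance:

config.vm.provider :aws do |aws, override|
  aws.ami = "ami-xxxxxxxx"  # an image ID (ami-...), never an instance ID (i-...)
end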