I intended to use Vagrant and Chef Solo to set up an AWS environment, but I ran into some errors that I cannot solve. Can anybody help me?
The steps I used:
Install the necessary tools on Mac OS X: Vagrant and its plugins, VirtualBox, Chef and its plugins, and so on.
Download the Vagrant configuration files:
git clone https://github.com/ICTatRTI/ict-chef-repo
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
#config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_chef-11.2.0.box"
#config.vm.box = "opscode-ubuntu-1204"
config.vm.box = "dummy"
config.vm.network :forwarded_port, guest: 80, host: 8888
config.vm.network :forwarded_port, guest: 3306, host: 3333
config.ssh.username = "ubuntu"
config.vm.provider :aws do |aws, override|
#config.vm.provider :aws do |aws|
aws.access_key_id = 'XXXXXXXXXXXXXXXQ'
aws.secret_access_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
aws.keypair_name = "usr-aws-2013"
aws.availability_zone = "us-west-2c"
aws.instance_type = "t1.micro"
aws.region = "us-west-2"
aws.ami = "ami-0849a03f"
aws.security_groups = ['quicklaunch-1']
aws.tags = {
'Name' => 'tong',
'Description' => 'vagrant test'
}
override.ssh.private_key_path = "~/.ssh/usr-aws-2013.pem"
override.ssh.username = "ubuntu"
end
config.vm.provision :chef_solo do |chef|
chef.node_name = 'base'
chef.cookbooks_path = "./cookbooks"
chef.roles_path = "./roles"
chef.add_role "base"
chef.add_role "ushahidi"
end
end
Run:
vagrant up --provider=aws
I got the following errors:
Bringing machine 'default' up with 'aws' provider...
WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: t1.micro
[default] -- AMI: ami-0849a03f
[default] -- Region: us-west-2
[default] -- Availability Zone: us-west-2c
[default] -- Keypair: usr-aws-2013
[default] -- Security Groups: ["quicklaunch-1"]
[default] -- Block Device Mapping: []
[default] -- Terminate On Shutdown: false
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An unexpected error ocurred when executing the action on the
'default' machine. Please report this as a bug:
The image id '[ami-0849a03f]' does not exist
An instance and an AMI are different things, and they have different identifiers. So if you have instance i-bddcf889, you cannot reference it in your Vagrantfile as ami-bddcf889.
You don't need to create or start the instance manually; instead, you must provide an AMI from which Vagrant will create the instance itself. For example, use the AMI you launched your manual instance from.
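If you are not sure which AMI that was, you can look it up from the instance ID with the AWS CLI. A minimal sketch, assuming the instance ID i-bddcf889 mentioned above and working CLI credentials:
# Print the AMI a known instance was launched from
aws ec2 describe-instances \
  --instance-ids i-bddcf889 \
  --query 'Reservations[].Instances[].ImageId' \
  --output text
The printed ami-xxxxxxxx value is what belongs in aws.ami in the Vagrantfile.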
Related
When I try to connect with aws-azure-login I get this error:
UnknownEndpoint: Inaccessible host: `sts.amazonaws.com' at port `undefined'. This service may not be available in the `us-east-1' region.
at Request.ENOTFOUND_ERROR (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\event_listeners.js:529:46)
at Request.callListeners (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\request.js:686:14)
at error (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\event_listeners.js:361:22)
at ClientRequest.<anonymous> (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\http\node.js:99:9)
at ClientRequest.emit (node:events:390:28)
at ClientRequest.emit (node:domain:475:12)
at TLSSocket.socketErrorListener (node:_http_client:447:9)
at TLSSocket.emit (node:events:390:28)
at TLSSocket.emit (node:domain:475:12)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'UnknownEndpoint',
region: 'us-east-1',
But I want to connect to eu-west-3 instead of us-east-1; it seems that my configured region is never picked up.
> aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region eu-west-3 config-file ~/.aws/config
My ~/.aws/config file:
[default]
azure_tenant_id=d8f7***-**-**-9561de6
azure_app_id_uri=https://signin.aws.amazon.com/saml
azure_default_username=[my compagnie mail]
azure_default_role_arn=
azure_default_duration_hours=12
azure_default_remember_me=false
region=eu-west-3
[profile dev_dom_role]
role_arn=[ my arn role: arn:aws:iam::****:role/dev_dom_role]
source_profile=default
azure_tenant_id=d8f7***-**-**-9561de6
azure_app_id_uri=https://signin.aws.amazon.com/saml
azure_default_username=[my compagnie mail]
azure_default_role_arn=[ my arn role: arn:aws:iam::****:role/dev_dom_role]
azure_default_duration_hours=12
azure_default_remember_me=false
When I try to configure my profile with aws-azure-login --configure -p default, all the information is recognized correctly, but unfortunately it never asks for a region.
How am I connecting? I tried with both roles, dev_dom_role and the default role:
aws-azure-login --mode=gui --profile dev_dom_role
aws-azure-login --mode=gui
sts.amazonaws.com wasn't recognized:
nslookup.exe sts.amazonaws.com
Server:  ad.intranet.mycompany.fr
Address: 10.10.9.9
*** ad.intranet.mycompany.com can't find sts.amazonaws.com : Non-existent domain
I set the proxy and I was finally able to connect.
PROXY=http://proxy.net:10684
echo "SET PROXY : " $PROXY
export http_proxy=$PROXY
export HTTP_PROXY=$PROXY
export https_proxy=$PROXY
export HTTPS_PROXY=$PROXY
npm config set proxy $PROXY
npm config set https-proxy $PROXY
yarn config set proxy $PROXY
yarn config set https-proxy $PROXY
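A quick way to check that the proxy can actually reach the STS endpoint (this assumes curl is installed and $PROXY is set as above):
# Should print an HTTP status code rather than a DNS or connection error
curl -sS -x "$PROXY" -o /dev/null -w "%{http_code}\n" https://sts.amazonaws.com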
I am trying to apply initial configuration and software installation to a newly created AWS EC2 instance using Ansible. If I run my playbooks independently, everything works just as I want. However, if I try to automate it into a single playbook by using two imports, it doesn't work (probably because the dynamic inventory can't pick up the newly created IP address?)...
Running together:
[WARNING]: Could not match supplied host pattern, ignoring:
aws_region_eu_central_1
PLAY [variables from dynamic inventory] ****************************************
skipping: no hosts matched
Running separately:
TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host XX.XX.XX.XX is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change the meaning of that path. See https://docs.ansible.com
/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information.
ok: [XX.XX.XX.XX]
This is my main playbook:
- import_playbook: server-setup.yml
- import_playbook: server-configuration.yml
server-setup.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  hosts: localhost
  roles:
    - ec2-instance
server-configuration.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  become: true
  become_method: sudo
  become_user: root
  ignore_unreachable: true
  hosts: aws_region_eu_central_1
  gather_facts: false
  pre_tasks:
    - pause:
        minutes: 5
  roles:
    - { role: epel, sudo: true }
    - { role: nodejs, sudo: true }
This is my ansible.cfg file:
[defaults]
inventory = test_aws_ec2.yaml
private_key_file = master-key.pem
enable_plugins = aws_ec2
host_key_checking = False
pipelining = True
log_path = ansible.log
roles_path = /roles
forks = 1000
and finally my hosts.ini:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python3
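If the cause is what the question suspects (the aws_ec2 dynamic inventory is read once at startup, before server-setup.yml has created the instance), one possible workaround is to refresh the inventory between the two imported playbooks. A minimal, untested sketch reusing the file names above:
---
# main playbook (sketch): re-read the dynamic inventory between the two plays
- import_playbook: server-setup.yml

- name: refresh inventory so the new instance's IP is visible
  hosts: localhost
  gather_facts: false
  tasks:
    - meta: refresh_inventory

- import_playbook: server-configuration.yml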
When I create an Amazon Ubuntu instance from the AWS web console and try to log in to that instance over SSH from any remote computer, I am able to log in. But when I create the EC2 instance using my Ansible aws.yml file and try to do the same, I am unable to connect and get a Permission denied (publickey) error from every remote host except the one on which I ran the Ansible script. Am I doing something wrong in my Ansible file?
Here is my Ansible YAML file:
auth: {
  auth_url: "",
  # This should be your AWS Access Key ID
  username: "AKIAJY32VWHYOFOR4J7Q",
  # This should be your AWS Secret Access Key;
  # it can be passed as part of the command line when running the playbook
  password: "{{ password | default(lookup('env', 'AWS_SECRET_KEY')) }}"
}

# These variables define the AWS cloud provision attributes
cluster: {
  region_name: "us-east-1",      #TODO Dynamic fetch
  availability_zone: "",         #TODO Dynamic fetch based on region
  security_group: "Fabric",
  target_os: "ubuntu",
  image_name: "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*",
  image_id: "ami-d15a75c7",
  flavor_name: "t2.medium",      # "m2.medium" is big enough for Fabric
  ssh_user: "ubuntu",
  validate_certs: True,
  private_net_name: "demonet",
  public_key_file: "/home/ubuntu/.ssh/fd.pub",
  private_key_file: "/home/ubuntu/.ssh/fd",
  ssh_key_name: "fabric",
  # This variable indicates which IP should be used; the only valid values are
  # private_ip or public_ip
  node_ip: "public_ip",
  container_network: {
    Network: "172.16.0.0/16",
    SubnetLen: 24,
    SubnetMin: "172.16.0.0",
    SubnetMax: "172.16.255.0",
    Backend: {
      Type: "udp",
      Port: 8285
    }
  },
  service_ip_range: "172.15.0.0/24",
  dns_service_ip: "172.15.0.4",
  # This section defines preallocated IP addresses for each node; if there are
  # no preallocated IPs, leave it blank
  node_ips: [ ],
  # Fabric network node names are expected to follow a clear pattern; this
  # defines the prefix for the node names.
  name_prefix: "fabric",
  domain: "fabricnet",
  # stack_size determines how many virtual or physical machines we will have;
  # each machine will be named ${name_prefix}001 to ${name_prefix}${stack_size}
  stack_size: 3,
  etcdnodes: ["fabric001", "fabric002", "fabric003"],
  builders: ["fabric001"],
  flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz",
  etcd_repo: "https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz",
  k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/",
  go_ver: "1.8.3",
  # If a volume is to be used, specify a size in GB; make the volume size 0 if
  # you do not wish to use a volume from your cloud
  volume_size: 8,
  # cloud block device name presented on virtual machines
  block_device_name: "/dev/vdb"
}
For login:
To log in using SSH I am doing these steps:
1- Download the private key file.
2- chmod 600 the private key.
3- ssh -vvv -i ~/.ssh/sshkeys.pem ubuntu@ec.compute-1.amazonaws.com
I am getting the error Permission denied (publickey).
You should be using the key pair that you created for connecting to the AWS instance.
Go to the EC2 dashboard, find your instances, and click Connect on the running instance that you need to SSH into.
It would be something like:
ssh -i "XXX.pem" ubuntu@ec2-X-XXX-XX-XX.XX-XXX-2.compute.amazonaws.com
Save that XXX.pem key pair file from AWS to your machine,
not an ssh-keygen key generated on your own system.
I am attempting to spin up a spot instance via Terraform. When I try to use a provisioner block (either "remote-exec" or "file"), it fails and I see an SSH error in DEBUG-level output. When I switch from a spot instance request to a standard aws_instance resource declaration, the provisioning works fine.
Code not working:
resource "aws_spot_instance_request" "worker01" {
ami = "ami-0cb95574"
spot_price = "0.02"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
Error:
aws_spot_instance_request.worker01 (remote-exec): Connecting to remote host via SSH...
aws_spot_instance_request.worker01 (remote-exec): Host:
aws_spot_instance_request.worker01 (remote-exec): User: ec2-user
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshaking with SSH
aws_spot_instance_request.worker01 (remote-exec): Password: false
aws_spot_instance_request.worker01 (remote-exec): Private key: true
aws_spot_instance_request.worker01 (remote-exec): SSH Agent: true
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 Retryable error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Working code:
resource "aws_instance" "worker01" {
ami = "ami-0cb95574"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
I have tried a few different iterations of the non-working code (including a silly attempt to hard-code a public IP for the spot instance, and an attempted self-reference to the spot instance's public IP, which gave a "no such attribute" error). Unfortunately, I could not find anyone with similar issues via Google. From what I have read, I should be able to provision a spot instance in this manner.
Thanks for any help you can provide.
You need to add wait_for_fulfillment = true to your spot instance request or the resource will return before the instance is created.
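For reference, a minimal sketch of that change against the non-working resource above (only the new argument is shown; all other arguments, the connection block, and the provisioner stay the same):
resource "aws_spot_instance_request" "worker01" {
  # ... same arguments, connection and provisioner blocks as above ...

  # Wait until the spot request is fulfilled and the instance exists before
  # Terraform tries to connect and run the provisioners.
  wait_for_fulfillment = true
}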
I am on Windows 10, using vagrant-aws (https://github.com/mitchellh/vagrant-aws) to vagrant up an Amazon instance, and I am getting the following error. I have listed my Vagrantfile as well.
Also, some people reported this might be caused by clock skew. I have synced the system time on Windows 10, but still no luck!
$ vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
C:/Users/jacky/.vagrant.d/gems/gems/vagrant-aws-0.7.0/lib/vagrant-aws/action/run_instance.rb:98: warning: duplicated key at line 100 ignored: :associate_public_ip
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: m3.medium
==> default: -- AMI: ami-42116522
==> default: -- Region: us-west-1
==> default: -- Keypair: 2016_05_14_keypair
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
There was an error talking to AWS. The error message is shown
below:
AuthFailure => AWS was not able to validate the provided access credentials
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "dummy"
config.vm.provider :aws do |aws, override|
aws.access_key_id = "..."
aws.secret_access_key = "..."
aws.session_token = "..."
aws.keypair_name = "2016_05_14_keypair"
aws.ami = "ami-42116522"
aws.region = "us-west-1"
#aws.instance_type = "t2.small"
override.ssh.username = "ubuntu"
override.ssh.private_key_path = "C:/2016_05_14_keypair.pem"
end
end
I know this may be a bit late for you. I had the same issue, with a Vagrantfile identical to yours, and I resolved it by removing the aws.session_token line.
Mine was a simpler solution: I had capitalized "US" in the region name, and it turns out the region is case sensitive - doh!