vagrant-aws: AWS was not able to validate the provided access credentials

I am on Windows 10, using vagrant-aws (https://github.com/mitchellh/vagrant-aws) to bring up an Amazon EC2 instance with `vagrant up`, and I am getting the following error. I have included my Vagrantfile as well.
Also, some people have reported that this can be caused by clock skew. I have synced the system time on Windows 10, but still no luck!
$ vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
C:/Users/jacky/.vagrant.d/gems/gems/vagrant-aws-0.7.0/lib/vagrant-aws/action/run_instance.rb:98: warning: duplicated key at line 100 ignored: :associate_public_ip
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: m3.medium
==> default: -- AMI: ami-42116522
==> default: -- Region: us-west-1
==> default: -- Keypair: 2016_05_14_keypair
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
There was an error talking to AWS. The error message is shown
below:
AuthFailure => AWS was not able to validate the provided access credentials
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "..."
    aws.secret_access_key = "..."
    aws.session_token = "..."
    aws.keypair_name = "2016_05_14_keypair"
    aws.ami = "ami-42116522"
    aws.region = "us-west-1"
    #aws.instance_type = "t2.small"

    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "C:/2016_05_14_keypair.pem"
  end
end

I know this may be a bit late for you. I had the same issue, with my Vagrantfile identical to yours, and I resolved it by removing the "aws.session_token = " line.
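For reference, a minimal sketch of the provider block with that line removed (reusing the values from the question; the credentials are placeholders):

config.vm.provider :aws do |aws, override|
  aws.access_key_id     = "..."   # placeholder
  aws.secret_access_key = "..."   # placeholder
  # no aws.session_token - only needed for temporary STS credentials
  aws.keypair_name = "2016_05_14_keypair"
  aws.ami          = "ami-42116522"
  aws.region       = "us-west-1"

  override.ssh.username         = "ubuntu"
  override.ssh.private_key_path = "C:/2016_05_14_keypair.pem"
end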

Mine was a simpler solution. I capitalized "US" in the region name - it is case-sensitive - doh!

Related

How to correctly use dynamic inventories with Ansible?

I am trying to apply initial configuration and software installation to a newly created AWS EC2 instance using Ansible. If I run my playbooks independently, it works just as I want. However, if I try to automate it into a single playbook by using two imports, it doesn't work (probably because the dynamic inventory can't pick up the newly created IP address?)... One possible workaround is sketched after the configuration files below.
Running together:
[WARNING]: Could not match supplied host pattern, ignoring:
aws_region_eu_central_1
PLAY [variables from dynamic inventory] ****************************************
skipping: no hosts matched
Running separately:
TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host XX.XX.XX.XX is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change the meaning of that path. See https://docs.ansible.com
/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information.
ok: [XX.XX.XX.XX]
This is my main playbook:
- import_playbook: server-setup.yml
- import_playbook: server-configuration.yml
server-setup.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  hosts: localhost
  roles:
    - ec2-instance
server-configuration.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  become: true
  become_method: sudo
  become_user: root
  ignore_unreachable: true
  hosts: aws_region_eu_central_1
  gather_facts: false
  pre_tasks:
    - pause:
        minutes: 5
  roles:
    - { role: epel, sudo: true }
    - { role: nodejs, sudo: true }
This is my ansible.cfg file:
[defaults]
inventory = test_aws_ec2.yaml
private_key_file = master-key.pem
enable_plugins = aws_ec2
host_key_checking = False
pipelining = True
log_path = ansible.log
roles_path = /roles
forks = 1000
and finally my hosts.ini:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python3
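One way to make the second play see the freshly created instance (a minimal sketch, not part of the original question; it assumes the aws_ec2 inventory plugin configured above) is to refresh the dynamic inventory between the two imports in the main playbook:

---
- import_playbook: server-setup.yml

# Re-read all inventory sources so the aws_region_eu_central_1 group
# now contains the instance created by server-setup.yml.
- name: refresh dynamic inventory
  hosts: localhost
  gather_facts: false
  tasks:
    - meta: refresh_inventory

- import_playbook: server-configuration.yml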

Vault UI is not opening in browser

My Vault is deployed in an EKS cluster with an S3 backend, but whenever I try to access the UI in the browser it gives a 404.
This is my values.yaml file:
server:
  affinity: null
  dataStorage:
    enabled: false
  dev:
    enabled: false
  standalone:
    enabled: false
  ha:
    enabled: true
    replicas: 1
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "s3" {
        access_key = "key"
        secret_key = "key"
        bucket = "name"
        region = "region"
      }
ui:
  enabled: true
  serviceType: "LoadBalancer"
An HTTP 404 status code means that the specified resource, such as a vault, upload ID, or job ID, does not exist.
Make sure that you have properly handled Vault - vault-404.
Take a look: 404-http-code-amazon.
Don't you have extraSecretEnvironmentVars? Please fill in that field of the file with the proper values.
Add the following block under the server standalone section:
service:
  enabled: true
Look at: standalone-service-vault.
Your values.json file should be in YAML format. More importantly, it's not even valid YAML, as it has a block of JSON plonked in the middle of it.
Look at the proper example here: vault-eks.
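For orientation, a minimal sketch of where those keys sit, assuming the layout of the official hashicorp/vault Helm chart (the values shown are illustrative, not taken from the question):

server:
  ha:
    enabled: true
  # `service` sits directly under `server`, alongside `ha`/`standalone`
  service:
    enabled: true
ui:
  enabled: true
  serviceType: "LoadBalancer"
  externalPort: 8200   # port exposed by the LoadBalancer for the UI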
It is solved. I just had to set an external port for the UI service:
enabled: true
serviceType: "LoadBalancer"
externalPort: 8200

Vagrant AWS error when specifying a VPC

I'm using Vagrant and Chef Solo to set up a server in AWS. When setting up an EC2-Classic instance it works fine; however, when I attempt to specify a VPC (through the subnet ID) and an EC2 instance type that is only available within a VPC, the vagrant up call results in the error below. Does anyone know how to get Vagrant to work with VPCs?
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-96401ce1
==> default: -- Region: eu-west-1
==> default: -- Keypair: vagrant-key-pair-eu-west-1
==> default: -- IAM Instance Profile Name: DEV-config-ipython
==> default: -- Security Groups: ["ssh-only-from-anywhere", "http-from-me"]
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Assigning a public IP address in a VPC: false
There was an error talking to AWS. The error message is shown
below:
VPCResourceNotSpecified => The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
My Vagrantfile is as follows...
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu_aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
  #config.vm.synced_folder "../.", "/vagrant", id: "vagrant-root"
  config.omnibus.chef_version = :latest

  config.vm.provider :aws do |aws, override|
    aws.region = "eu-west-1"
    aws.security_groups = [ 'ssh-only-from-me', 'http-from-me' ]
    aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.keypair_name = ENV['AWS_KEYPAIR_NAME']
    #aws.instance_type = "m3.medium"
    aws.instance_type = "t2.micro"
    aws.ami = "ami-96401ce1" ## Ubuntu 14 LTS on HVM
    subnet_id = "subnet-ed9cd588" # vagrantVPC publicSubnet
    associate_public_ip = true
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['MY_PRIVATE_AWS_SSH_KEY_PATH']
    aws.iam_instance_profile_name = 'DEV-config-ipython'
    aws.tags = {
      'Name' => 'ipython',
      'env' => 'DEV',
      'application' => 'test',
    }
  end

  config.vm.provision :chef_solo do |chef|
    config.berkshelf.enabled = true
    chef.data_bags_path = "./data_bags"
    chef.custom_config_path = "Vagrantfile.chef"
    chef.json = {
      'java' => {
        "install_flavor" => "oracle",
        "jdk_version" => "8",
        "oracle" => {
          "accept_oracle_download_terms" => true
        }
      }
    }
    chef.add_recipe "apt"
    chef.add_recipe "build-essential"
    #chef.add_recipe "serverTest"
    chef.add_recipe "java"
    chef.add_recipe "gson"
    chef.add_recipe "log4j"
    chef.add_recipe "maven"
    chef.add_recipe "aws-sdk-cookbook"
    chef.add_recipe "cron"
    chef.add_recipe "awscli"
    chef.add_recipe "maven"
  end
end
Should it maybe be:
aws.subnet_id = "subnet-ed9cd588" # vagrantVPC publicSubnet
aws.associate_public_ip = true
Without the aws. prefix, those two lines just assign local Ruby variables inside the block, so the provider never receives the subnet ID.
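For clarity, the relevant part of the provider block would then look roughly like this (a sketch reusing the values from the question):

config.vm.provider :aws do |aws, override|
  # ...other settings as before...
  aws.instance_type       = "t2.micro"
  # With the aws. prefix these become provider settings instead of
  # throwaway local variables:
  aws.subnet_id           = "subnet-ed9cd588" # vagrantVPC publicSubnet
  aws.associate_public_ip = true
end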

Kitchen-EC2 SSH prompting password for an instance inside VPC

I am trying to spin up an EC2 instance inside a VPC on a private subnet. Every time I run kitchen test, I am able to spin up the instance with the right security groups and in the right subnet range. But when test-kitchen tries to SSH onto the instance, it asks for a password. However, when I manually try to ssh (ssh <private_ip> -i <path_to_ssh_key> -l ubuntu) onto the machine, I succeed without being prompted for a password.
The following is my .kitchen.yml file
---
driver:
  name: ec2
  aws_ssh_key_id: id-spanning
  security_group_ids: ['sg-9....5']
  region: us-east-1
  availability_zone: us-east-1a
  require_chef_omnibus: true
  subnet_id: subnet-5...0
  associate_public_ip: false
  instance_type: m3.medium
  interface: private

transport:
  ssh_key: ~/.ssh/id-spanning.pem
  connection_timeout: 10
  connection_retries: 5
  username: ubuntu

provisioner:
  name: chef_solo

platforms:
  - name: Ubuntu-14.04
    driver:
      image_id: ami-8821cae0

suites:
  - name: default
    run_list:
    attributes:
I have the AWS credentials in place in environment variables. The following is my output.
kitchen test
-----> Starting Kitchen (v1.4.0)
-----> Cleaning up any prior instances of <default-Ubuntu-1404>
-----> Destroying <default-Ubuntu-1404>...
EC2 instance <i-16f468c6> destroyed.
Finished destroying <default-Ubuntu-1404> (0m1.90s).
-----> Testing <default-Ubuntu-1404>
-----> Creating <default-Ubuntu-1404>...
Creating <>...
If you are not using an account that qualifies under the AWS
free-tier, you may be charged to run these suites. The charge
should be minimal, but neither Test Kitchen nor its maintainers
are responsible for your incurred costs.
Instance <i-8fad345f> requested.
EC2 instance <i-8fad345f> created.
Waited 0/300s for instance <i-8fad345f> to become ready.
Waited 5/300s for instance <i-8fad345f> to become ready.
Waited 10/300s for instance <i-8fad345f> to become ready.
Waited 15/300s for instance <i-8fad345f> to become ready.
Waited 20/300s for instance <i-8fad345f> to become ready.
Waited 25/300s for instance <i-8fad345f> to become ready.
EC2 instance <i-8fad345f> ready.
Password:
I tried several times and haven't had any luck getting past the password prompt so that test-kitchen can SSH onto the instance. The following is my kitchen diagnose output.
---
timestamp: 2015-05-26 15:34:29 UTC
kitchen_version: 1.4.0
instances:
  default-Ubuntu-1404:
    platform:
      os_type: unix
      shell_type: bourne
    state_file:
      hostname: ''
      server_id: i-1.....6
    driver:
      associate_public_ip: false
      availability_zone: us-east-1a
      aws_access_key_id:
      aws_secret_access_key:
      aws_session_token:
      aws_ssh_key_id: id-spanning
      block_device_mappings:
      ebs_optimized: false
      flavor_id:
      iam_profile_name:
      image_id: ami-8821cae0
      instance_type: m3.medium
      interface: private
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      name: ec2
      price:
      private_ip_address:
      region: us-east-1
      retryable_sleep: 5
      retryable_tries: 60
      security_group_ids:
      - sg-9....5
      shared_credentials_profile:
      subnet_id: subnet-5....0
      tags:
        created-by: test-kitchen
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      user_data:
      username:
    provisioner:
      attributes: {}
      chef_metadata_url:
      chef_omnibus_install_options:
      chef_omnibus_root: "/opt/chef"
      chef_omnibus_url: https://www.chef.io/chef/install.sh
      chef_solo_path: "/opt/chef/bin/chef-solo"
      clients_path:
      cookbook_files_glob: README.*,metadata {json,rb},attributes/**/*,definitions/**/*,files/**/*,libraries/**/*,providers/**/*,recipes/**/*,resources/**/*,templates/**/*
      data_bags_path:
      data_path:
      encrypted_data_bag_secret_key_path:
      environments_path:
      http_proxy:
      https_proxy:
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_file:
      log_level: :info
      name: chef_solo
      nodes_path:
      require_chef_omnibus: true
      roles_path:
      root_path: "/tmp/kitchen"
      run_list: []
      solo_rb: {}
      sudo: true
      sudo_command: sudo -E
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
    transport:
      compression: zlib
      compression_level: 6
      connection_retries: 5
      connection_retry_sleep: 1
      connection_timeout: 10
      keepalive: true
      keepalive_interval: 60
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      max_wait_until_ready: 600
      name: ssh
      port: 22
      ssh_key: "/Users/jonnas2/.ssh/id-spanning.pem"
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      username: ubuntu
    verifier:
      busser_bin: "/tmp/verifier/bin/busser"
      http_proxy:
      https_proxy:
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      name: busser
      root_path: "/tmp/verifier"
      ruby_bindir: "/opt/chef/embedded/bin"
      sudo: true
      sudo_command: sudo -E
      suite_name: default
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      version: busser
versions used:
test-kitchen 1.4.0
kitchen-ec2 0.9.0
Any help would be greatly appreciated. Thanks.
This issue was resolved by test-kitchen 1.4.1. A fix was merged (https://github.com/test-kitchen/test-kitchen/pull/704) into core test-kitchen which disables password auth if an ssh_key is configured.
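If the gems are managed with Bundler, a minimal sketch of pinning the fixed version (the Gemfile itself is an assumption, not shown in the question):

# Gemfile
source "https://rubygems.org"

# 1.4.1 includes the fix that disables SSH password auth when ssh_key is set
gem "test-kitchen", ">= 1.4.1"
gem "kitchen-ec2"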

Errors occurred when using Chef-solo + Vagrant + AWS on Mac OS X

I intended to use Vagrant and Chef-solo to establish an AWS environment, but I got some errors that I cannot solve. Can anybody help me?
The steps I used:
Install all the necessary tools on Mac OS X: Vagrant, Vagrant plugins, VirtualBox, Chef, Chef plugins, and so on.
Download the Vagrant configuration files:
git clone https://github.com/ICTatRTI/ict-chef-repo
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
  #config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_chef-11.2.0.box"
  #config.vm.box = "opscode-ubuntu-1204"
  config.vm.box = "dummy"
  config.vm.network :forwarded_port, guest: 80, host: 8888
  config.vm.network :forwarded_port, guest: 3306, host: 3333
  config.ssh.username = "ubuntu"

  config.vm.provider :aws do |aws, override|
  #config.vm.provider :aws do |aws|
    aws.access_key_id = 'XXXXXXXXXXXXXXXQ'
    aws.secret_access_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    aws.keypair_name = "usr-aws-2013"
    aws.availability_zone = "us-west-2c"
    aws.instance_type = "t1.micro"
    aws.region = "us-west-2"
    aws.ami = "ami-0849a03f"
    aws.security_groups = ['quicklaunch-1']
    aws.tags = {
      'Name' => 'tong',
      'Description' => 'vagrant test'
    }
    override.ssh.private_key_path = "~/.ssh/usr-aws-2013.pem"
    override.ssh.username = "ubuntu"
  end

  config.vm.provision :chef_solo do |chef|
    chef.node_name = 'base'
    chef.cookbooks_path = "./cookbooks"
    chef.roles_path = "./roles"
    chef.add_role "base"
    chef.add_role "ushahidi"
  end
end
Run:
vagrant up --provider=aws
I got the following errors:
Bringing machine 'default' up with 'aws' provider...
WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: t1.micro
[default] -- AMI: ami-0849a03f
[default] -- Region: us-west-2
[default] -- Availability Zone: us-west-2c
[default] -- Keypair: usr-aws-2013
[default] -- Security Groups: ["quicklaunch-1"]
[default] -- Block Device Mapping: []
[default] -- Terminate On Shutdown: false
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An unexpected error ocurred when executing the action on the
'default' machine. Please report this as a bug:
The image id '[ami-0849a03f]' does not exist
An instance and an AMI are different things, and they have different IDs too. So if you have i-bddcf889, you cannot reference it in your Vagrantfile as ami-bddcf889.
You don't have to create/start the instance manually; instead, you must provide an AMI from which Vagrant will create the instance itself - for example, the one you created your manual instance from.
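A minimal sketch of the relevant lines (the AMI ID is a placeholder; substitute an image that actually exists in us-west-2):

config.vm.provider :aws do |aws, override|
  aws.region = "us-west-2"
  # Must be an AMI ID (ami-...) visible in this region,
  # not an instance ID (i-...).
  aws.ami = "ami-xxxxxxxx"  # placeholder
end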