When I create an Amazon Ubuntu instance from the AWS web console and try to log in to it over SSH from any remote computer, I am able to log in. But when I create the EC2 instance using an Ansible aws.yml file and try to do the same, I am unable to connect and get the error Permission denied (publickey) from every remote host except the host on which I ran the Ansible script. Am I doing something wrong in my Ansible file?
Here is my ansible yml file:
auth: {
auth_url: "",
# This should be your AWS Access Key ID
username: "AKIAJY32VWHYOFOR4J7Q",
# This should be your AWS Secret Access Key
# can be passed as part of cmd line when running the playbook
password: "{{ password | default(lookup('env', 'AWS_SECRET_KEY')) }}"
}
# These variables define AWS cloud provision attributes
cluster: {
region_name: "us-east-1", #TODO Dynamic fetch
availability_zone: "", #TODO Dynamic fetch based on region
security_group: "Fabric",
target_os: "ubuntu",
image_name: "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*",
image_id: "ami-d15a75c7",
flavor_name: "t2.medium", # "m2.medium" is big enough for Fabric
ssh_user: "ubuntu",
validate_certs: True,
private_net_name: "demonet",
public_key_file: "/home/ubuntu/.ssh/fd.pub",
private_key_file: "/home/ubuntu/.ssh/fd",
ssh_key_name: "fabric",
# This variable indicates which IP should be used; the only valid values are
# private_ip or public_ip
node_ip: "public_ip",
container_network: {
Network: "172.16.0.0/16",
SubnetLen: 24,
SubnetMin: "172.16.0.0",
SubnetMax: "172.16.255.0",
Backend: {
Type: "udp",
Port: 8285
}
},
service_ip_range: "172.15.0.0/24",
dns_service_ip: "172.15.0.4",
# This section defines preallocated IP addresses for each node; if there are no
# preallocated IPs, leave it blank
node_ips: [ ],
# Fabric network node names are expected to follow a clear pattern; this defines
# the prefix for the node names.
name_prefix: "fabric",
domain: "fabricnet",
# stack_size determines how many virtual or physical machines we will have
# each machine will be named ${name_prefix}001 to ${name_prefix}${stack_size}
stack_size: 3,
etcdnodes: ["fabric001", "fabric002", "fabric003"],
builders: ["fabric001"],
flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz",
etcd_repo: "https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz",
k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/",
go_ver: "1.8.3",
# If a volume is to be used, specify a size in GB; set volume_size to 0 if you do
# not wish to use a volume from your cloud
volume_size: 8,
# cloud block device name presented on virtual machines.
block_device_name: "/dev/vdb"
}
For Login:
To log in using SSH I am doing these steps:
1- Download the private key file.
2- chmod 600 the private key.
3- ssh -vvv -i ~/.ssh/sshkeys.pem ubuntu@ec.compute-1.amazonaws.com
I am getting the error Permission denied (publickey).
You should be using the key pair that you created for connecting to the AWS instance.
Go to the EC2 dashboard, find your instances, and click Connect on the running instance that you need to SSH to.
It would be something like
ssh -i "XXX.pem" ubuntu@ec2-X-XXX-XX-XX.XX-XXX-2.compute.amazonaws.com
Save XXX.pem (the EC2 key pair file) to your machine - not a key generated with ssh-keygen on your system.
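For context: the vars above register the public key at public_key_file as the EC2 key pair named fabric, so the launched instances only trust the matching private key (/home/ubuntu/.ssh/fd), which exists only on the host that ran the playbook. Below is a minimal sketch of how such a key pair can be registered with Ansible's ec2_key module; the paths and region mirror the variables above and are assumptions about what aws.yml actually does.

# Sketch only: upload the local public key as the EC2 key pair "fabric".
# Any machine that should SSH into the instances must hold the matching
# private key (/home/ubuntu/.ssh/fd) and pass it with "ssh -i".
- name: Ensure the "fabric" key pair exists in AWS
  ec2_key:
    name: fabric
    key_material: "{{ lookup('file', '/home/ubuntu/.ssh/fd.pub') }}"
    region: us-east-1
  register: fabric_key

To connect from another workstation, either copy /home/ubuntu/.ssh/fd there (and chmod 600 it), or launch the instances with a key pair whose private key that workstation already holds.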
In AWS, to gain access to our RDS instance we set up a dedicated EC2 bastion host that we securely access by invoking the SSM Agent from the EC2 dashboard.
This is done by writing a shell script after connecting to the bastion host, but the script usually disappears after a certain time(?). So, is there any way to create this file using CDK when I create the bastion host?
I tried using CFN.init but to no avail.
this.bastionHost = new BastionHostLinux(this, "BastionHost", {
  vpc: inspireStack.vpc,
  subnetSelection: { subnetType: SubnetType.PRIVATE_WITH_NAT },
  instanceType: InstanceType.of(InstanceClass.T2, InstanceSize.MICRO),
  init: CloudFormationInit.fromConfigSets({
    configSets: {
      default: ["install"],
    },
    configs: {
      install: new InitConfig([
        InitCommand.shellCommand("cd ~"),
        InitFile.fromString("jomar.sh", "testing 123"),
        InitCommand.shellCommand("chmod +x jomar.sh"),
      ]),
    },
  }),
});
You can write files to an EC2 instance with cloud-init, either from an existing file or directly from the TypeScript code (a JSON object, for instance).
const ec2Instance = new ec2.Instance(this, 'Instance', {
  vpc,
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.T4G,
    ec2.InstanceSize.MICRO,
  ),
  machineImage: new ec2.AmazonLinuxImage({
    generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
    cpuType: ec2.AmazonLinuxCpuType.ARM_64,
  }),
  init: ec2.CloudFormationInit.fromConfigSets({
    configSets: {
      default: ['install', 'config'],
    },
    configs: {
      install: new ec2.InitConfig([
        ec2.InitFile.fromObject('/etc/config.json', {
          IP: ec2Eip.ref,
        }),
        ec2.InitFile.fromFileInline(
          '/etc/install.sh',
          './src/asteriskConfig/install.sh',
        ),
        ec2.InitCommand.shellCommand('chmod +x /etc/install.sh'),
        ec2.InitCommand.shellCommand('cd /tmp'),
        ec2.InitCommand.shellCommand('/etc/install.sh'),
      ]),
      config: new ec2.InitConfig([
        ec2.InitFile.fromFileInline(
          '/etc/asterisk/pjsip.conf',
          './src/asteriskConfig/pjsip.conf',
        ),
      ]),
    },
  }),
});
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CloudFormationInit.html
I see there are three simple workarounds:
The SSM start session has a 'profile' section, where you can add your script as a bash function.
You can create an SSM document that will create this file, so before starting the session you will only need to run this document to create the file...
Save this script on S3 and just download it.
Regarding the disappearing file - it's strange... This CDK construct is similar to Instance; try using it instead and create your script with user-data.
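Building on the user-data suggestion above, here is a minimal, untested sketch that writes a persistent helper script at first boot instead of relying on CFN init; the path and script contents are placeholders.

import { Stack } from 'aws-cdk-lib';
import {
  BastionHostLinux,
  InstanceClass,
  InstanceSize,
  InstanceType,
  IVpc,
} from 'aws-cdk-lib/aws-ec2';

// Sketch: persist a helper script on the bastion via user-data.
// User-data runs once at first boot, so the file survives later SSM sessions.
export function addBastionWithScript(stack: Stack, vpc: IVpc): BastionHostLinux {
  const bastion = new BastionHostLinux(stack, 'BastionHost', {
    vpc,
    instanceType: InstanceType.of(InstanceClass.T2, InstanceSize.MICRO),
  });

  bastion.instance.userData.addCommands(
    'cat > /home/ec2-user/jomar.sh <<"EOF"',
    '#!/bin/bash',
    'echo "testing 123"',
    'EOF',
    'chmod +x /home/ec2-user/jomar.sh',
  );

  return bastion;
}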
I am using the official helm chart of Jenkins.
I have enabled backup and also provided backup credentials.
Here is the relevant config in values.yaml:
## Backup cronjob configuration
## Ref: https://github.com/maorfr/kube-tasks
backup:
  # Backup must use RBAC
  # So by enabling backup you are enabling RBAC specific for backup
  enabled: true
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-backup"
  # Schedule to run jobs. Must be in cron time format
  # Ref: https://crontab.guru/
  schedule: "0 2 * * *"
  labels: {}
  annotations: {}
  # Example for authorization to AWS S3 using kube2iam
  # Can also be done using environment variables
  # iam.amazonaws.com/role: "jenkins"
  image:
    repository: "maorfr/kube-tasks"
    tag: "0.2.0"
  # Additional arguments for kube-tasks
  # Ref: https://github.com/maorfr/kube-tasks#simple-backup
  extraArgs: []
  # Add existingSecret for AWS credentials
  existingSecret: {}
  # gcpcredentials: "credentials.json"
  ## Example for using an existing secret
  # jenkinsaws:
  ## Use this key for AWS access key ID
  awsaccesskey: "AAAAJJJJDDDDDDJJJJJ"
  ## Use this key for AWS secret access key
  awssecretkey: "frkmfrkmrlkmfrkmflkmlm"
  # Add additional environment variables
  # jenkinsgcp:
  ## Use this key for GCP credentials
  env: []
  # Example environment variable required for AWS credentials chain
  # - name: "AWS_REGION"
  #   value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can be added. Visit this repository for details
  # Ref: https://github.com/maorfr/skbn
  destination: "s3://jenkins-data/backup"
However, the backup job fails as follows:
2020/01/22 20:19:23 Backup started!
2020/01/22 20:19:23 Getting clients
2020/01/22 20:19:26 NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
What is missing?
You must create a secret that looks like this:
kubectl create secret generic jenkinsaws --from-literal=jenkins_aws_access_key=ACCESS_KEY --from-literal=jenkins_aws_secret_key=SECRET_KEY
Then consume it like this:
existingSecret:
  jenkinsaws:
    awsaccesskey: jenkins_aws_access_key
    awssecretkey: jenkins_aws_secret_key
where jenkins_aws_access_key / jenkins_aws_secret_key are the keys inside that secret.
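Equivalently, the same secret can be declared as a manifest; this is just a sketch with placeholder values.

# Declarative equivalent of the kubectl command above (placeholder values).
apiVersion: v1
kind: Secret
metadata:
  name: jenkinsaws
type: Opaque
stringData:
  jenkins_aws_access_key: ACCESS_KEY
  jenkins_aws_secret_key: SECRET_KEY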
backup:
  enabled: true
  destination: "s3://jenkins-pumbala/backup"
  schedule: "15 1 * * *"
  env:
    - name: "AWS_ACCESS_KEY_ID"
      value: "AKIDFFERWT***D36G"
    - name: "AWS_SECRET_ACCESS_KEY"
      value: "5zGdfgdfgdf***************Isi"
I have used Ansible to create 1 AWS EC2 instance using the examples in the Ansible ec2 documentation. I can successfully create the instance with a tag. Then I temporarily add it to my local inventory group using add_host.
After doing this, I am having trouble when I try to configure the newly created instance. In my Ansible play, I would like to specify the instance by its tag name, e.g. hosts: <tag_name_here>, but I am getting an error.
Here is what I have done so far:
My directory layout is
inventory/
staging/
hosts
group_vars/
all/
all.yml
site.yml
My inventory/staging/hosts file is
[local]
localhost ansible_connection=local ansible_python_interpreter=/home/localuser/ansible_ec2/.venv/bin/python
My inventory/staging/group_vars/all/all.yml file is
---
ami_image: xxxxx
subnet_id: xxxx
region: xxxxx
launched_tag: tag_Name_NginxDemo
Here is my Ansible playbook site.yml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: key-nginx
        group: web_sg
        instance_type: t2.micro
        image: "{{ ami_image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: NginxDemo
        exact_count: 1
        count_tag:
          Name: NginxDemo
      register: ec2

    - name: Add EC2 instance to inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: tag_Name_NginxDemo
        ansible_user: centos_user
        ansible_become: yes
      with_items: "{{ ec2.instances }}"

- name: Configure EC2 instance in launched group
  hosts: tag_Name_NginxDemo
  become: True
  gather_facts: no
  tasks:
    - ping:
I run this playbook with
$ cd /home/localuser/ansible_ec2
$ source .venv/bin/activate
$ ansible-playbook -i inventory/staging site.yml -vvv
and this creates the EC2 instance - the first play works correctly. However, the second play gives the following error:
TASK [.....] ******************************************************************
The authenticity of host 'xx.xxx.xxx.xx (xx.xxx.xxx.xx)' can't be established.
ECDSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "module_stderr":
"Shared connection to xx.xxx.xxx.xx closed.\r\n", "module_stdout": "/bin/sh:
1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 127}
I followed the instructions from this SO question to create the task with add_host, and from here to set gather_facts: False, but this still does not allow the play to run correctly.
How can I target the EC2 host using the tag name?
EDIT:
Additional info
This is the only playbook I have run to this point. I see the error message requires Python, but I cannot install Python on the instance because I cannot connect to it in my Configure EC2 instance in launched group play. If I could make that connection, then I could install Python (if that is the problem), though I'm not sure how to connect to the instance.
EDIT 2:
Here is my Python info on the localhost where I am running Ansible
I am running Ansible inside a Python venv.
Here is my python inside the venv
$ python --version
Python 2.7.15rc1
$ which python
~/ansible_ec2/.venv/bin/python
Here are my details about Ansible that I installed inside the Python venv
ansible 2.6.2
config file = /home/localuser/ansible_ec2/ansible.cfg
configured module search path = [u'/home/localuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/localuser/ansible_ec2/.venv/local/lib/python2.7/site-packages/ansible
executable location = /home/localuser/ansible_ec2/.venv/bin/ansible
python version = 2.7.15rc1 (default, xxxx, xxxx) [GCC 7.3.0]
Ok, so after a lot of searching, I found one possible workaround here. Basically, this workaround uses the lineinfile module and adds the new EC2 instance details to the hosts file permanently, not just for the in-memory plays following the add_host task. I followed this suggestion very closely and this approach worked for me. I did not need to use the add_host module.
EDIT:
The task I added using the lineinfile module was
- name: Add EC2 instance to inventory group
  lineinfile: line="{{ item.public_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./inventory/staging/hosts
  with_items: "{{ ec2.instances }}"
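For reference, if you prefer to stay with the in-memory inventory from the original play, the same interpreter hint can also be set as a host variable on add_host (a sketch, untested):

# Sketch: set the Python interpreter on the in-memory host instead of the hosts file.
- name: Add EC2 instance to inventory group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: tag_Name_NginxDemo
    ansible_user: centos_user
    ansible_python_interpreter: /usr/bin/python3
  with_items: "{{ ec2.instances }}"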
How do I add custom logs to CloudWatch? The default logs are sent, but how do I add a custom one?
I already added a file like this (in .ebextensions):
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*

  "/opt/elasticbeanstalk/tasks/taillogs.d/cloud-init.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
Since I added bundlelogs.d and taillogs.d, these custom logs are now tailed and retrievable from the console or web. That's nice, but they don't persist and are not sent to CloudWatch.
In CloudWatch I have the default logs like
/aws/elasticbeanstalk/InstanceName/var/log/eb-activity.log
And I want to have another one like this
/aws/elasticbeanstalk/InstanceName/var/app/current/logs/mycustomlog.log
Both bundlelogs.d and taillogs.d are logs retrieved from the management console. What you want to do is extend the default logs (e.g. eb-activity.log) to CloudWatch Logs. In order to extend the log stream, you need to add another configuration under /etc/awslogs/config/. The configuration should follow the Agent Configuration File Format.
I've successfully extended my logs for my custom ubuntu/nginx/php platform. Here is my extension file FYI. Here is an official sample FYI.
In your case, it could be like
files:
  "/etc/awslogs/config/my_app_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs/xxx.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/logs/xxx.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/xxx.log*
Credit where due goes to Sebastian Hsu and Abhyudit Jain.
This is the final config file I came up with for .ebextensions for our particular use case. Notes explaining some aspects are below the code block.
files:
  "/etc/awslogs/config/beanstalklogs_custom.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/catalina.out]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Fn::Select" : [ "1", { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } ] }, "var/log/tomcat8/catalina.out"]]}`
      log_stream_name = `{"Fn::Join":["--", [{ "Ref":"AWSEBEnvironmentName" }, "{instance_id}"]]}`
      file = /var/log/tomcat8/catalina.out*

services:
  sysvinit:
    awslogs:
      files:
        - "/etc/awslogs/config/beanstalklogs_custom.conf"

commands:
  rm_beanstalklogs_custom_bak:
    command: "rm beanstalklogs_custom.conf.bak"
    cwd: "/etc/awslogs/config"
    ignoreErrors: true
log_group_name
We have a standard naming scheme for our EB environments which is exactly environmentName-environmentType. I'm using { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } to split that into an array of two strings (name and type).
Then I use { "Fn::Select" : [ "1", <<SPLIT_OUTPUT>> ] } to get just the type string. Your needs would obviously differ, so you may only need the following:
log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/catalina.out"]]}`
log_stream_name
I'm using the Fn::Join function to join the EB environment name with the instance ID. Note that the instance ID template is a string that gets echoed exactly as given.
services
The awslogs service is restarted automatically when the custom conf file is deployed.
commands
When the files block overwrites an existing file, it creates a backup file, like beanstalklogs_custom.conf.bak. This block erases that backup file, because the awslogs service reads both files, potentially causing a conflict.
Result
If you log in to an EC2 instance and sudo cat the file, you should see something like this. Note that all the Fn functions have resolved. If you find that an Fn function didn't resolve, check it for syntax errors.
[/var/log/tomcat8/catalina.out]
log_group_name = /aws/elasticbeanstalk/environmentType/var/log/tomcat8/catalina.out
log_stream_name = environmentName-environmentType--{instance_id}
file = /var/log/tomcat8/catalina.out*
The awslogs agent looks in the configuration file for the log files which it's supposed to send. There are some defaults in it. You need to edit it and specify the files.
You can check and edit the configuration file located at:
/etc/awslogs/awslogs.conf
Make sure to restart the service:
sudo service awslogs restart
You can specify your own files there and create different groups and what not.
Please refer to the following link and you'll be able to get your logs in no time.
Resources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
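For illustration, a stanza for a custom application log could look like the following, whether placed in /etc/awslogs/awslogs.conf or in a file under /etc/awslogs/config/; the log file path and group name here are placeholders, not values from your environment.

[/var/app/current/logs/mycustomlog.log]
datetime_format = %Y-%m-%d %H:%M:%S
file = /var/app/current/logs/mycustomlog.log
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /aws/elasticbeanstalk/InstanceName/var/app/current/logs/mycustomlog.log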
Edit:
As you don't want to edit the files on the instance, you can add the relevant code to the .ebextensions folder in the root of your code. For example, this is my 01_cloudwatch.config:
packages:
  yum:
    awslogs: []

container_commands:
  01_get_awscli_conf_file:
    command: "aws s3 cp s3://project/awscli.conf /etc/awslogs/awscli.conf"
  02_get_awslogs_conf_file:
    command: "aws s3 cp s3://project/awslogs.conf.${NODE_ENV} /etc/awslogs/awslogs.conf"
  03_restart_awslogs:
    command: "sudo service awslogs restart"
  04_start_awslogs_at_system_boot:
    command: "sudo chkconfig awslogs on"
In this config, I am fetching the appropriate config file from an S3 bucket depending on NODE_ENV. You can do anything you want in your config.
Some great answers already here.
I've detailed in a new Medium blog how this all works and an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use, the article explains how to determine the right folder/file(s) to stream.
Note that if /var/app/current/logs/* contains many different files this may not work, e.g. if you have
database.log
app.log
random.log
Then you should consider adding a stream for each. However, if you have
app.2021-10-18.log
app.2021-10-17.log
app.2021-10-16.log
Then you can use /var/app/current/logs/app.*
packages:
  yum:
    awslogs: []

option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90

files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/*

commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
Looking at the AWS docs it's not immediately apparent, but there are a few things you need to do.
(Our environment is an Amazon Linux AMI - Rails App on the Ruby 2.6 Puma Platform).
First, create a Policy in IAM to give your EB-generated EC2 instances access to work with CloudWatch log groups and stream to them - we named ours "EB-Cloudwatch-LogStream-Access".
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*:log-stream:*"
    }
  ]
}
Once you have created this, make sure the policy is attached (in IAM > Roles) to your IAM Instance Profile and Service Role that are associated with your EB environment (check the environment's configuration page: Configuration > Security > IAM instance profile | Service Role).
Then, provide a .config file in your .ebextensions directory, such as setup_stream_to_cloudwatch.config or 0x_setup_stream_to_cloudwatch.config. In our project we have made it the last extension .config file to run during our deploys by setting a high number for 0x (e.g. 09_setup_stream_to_cloudwatch.config).
Then, provide the following, replacing your_log_file with the appropriate filename, keeping in mind that some log files live in /var/log on an Amazon Linux AMI and some (such as those generated by your application) may live in a path such as /var/app/current/log:
files:
  '/etc/awslogs/config/logs.conf':
    mode: '000600'
    owner: root
    group: root
    content: |
      [/var/app/current/log/your_log_file.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/log/your_log_file.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/log/your_log_file.log*

commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart # note that this works for Amazon Linux AMI only - other Linux instances likely use `systemd`
Deploy your application, and you should be set!
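If you prefer the CLI to the console for the policy attachment step, something like this should work; the role name shown is the Elastic Beanstalk default instance profile role and the account ID in the ARN is a placeholder for your own.

# Attach the custom policy to the instance profile role (placeholders: role name, account ID).
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::123456789012:policy/EB-Cloudwatch-LogStream-Access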
I intended to use Vagrant and Chef-solo to set up an AWS environment, but I got some errors that I cannot solve. Can anybody help me?
The steps I used:
Install all the necessary tooling on Mac OS X: vagrant, vagrant plugins, VirtualBox, chef, chef plugins, and so on.
Download the Vagrant configuration files:
git clone https://github.com/ICTatRTI/ict-chef-repo
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
  #config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_chef-11.2.0.box"
  #config.vm.box = "opscode-ubuntu-1204"
  config.vm.box = "dummy"
  config.vm.network :forwarded_port, guest: 80, host: 8888
  config.vm.network :forwarded_port, guest: 3306, host: 3333
  config.ssh.username = "ubuntu"

  config.vm.provider :aws do |aws, override|
  #config.vm.provider :aws do |aws|
    aws.access_key_id = 'XXXXXXXXXXXXXXXQ'
    aws.secret_access_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    aws.keypair_name = "usr-aws-2013"
    aws.availability_zone = "us-west-2c"
    aws.instance_type = "t1.micro"
    aws.region = "us-west-2"
    aws.ami = "ami-0849a03f"
    aws.security_groups = ['quicklaunch-1']
    aws.tags = {
      'Name' => 'tong',
      'Description' => 'vagrant test'
    }
    override.ssh.private_key_path = "~/.ssh/usr-aws-2013.pem"
    override.ssh.username = "ubuntu"
  end

  config.vm.provision :chef_solo do |chef|
    chef.node_name = 'base'
    chef.cookbooks_path = "./cookbooks"
    chef.roles_path = "./roles"
    chef.add_role "base"
    chef.add_role "ushahidi"
  end
end
Run:
vagrant up --provider=aws
I got the following errors:
Bringing machine 'default' up with 'aws' provider...
WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: t1.micro
[default] -- AMI: ami-0849a03f
[default] -- Region: us-west-2
[default] -- Availability Zone: us-west-2c
[default] -- Keypair: usr-aws-2013
[default] -- Security Groups: ["quicklaunch-1"]
[default] -- Block Device Mapping: []
[default] -- Terminate On Shutdown: false
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An unexpected error ocurred when executing the action on the
'default' machine. Please report this as a bug:
The image id '[ami-0849a03f]' does not exist
Instance and AMI are different things, and they have different IDs too. So if you have i-bddcf889, you cannot reference it in your Vagrantfile as ami-bddcf889.
You don't have to create/start the instance manually - instead, you must provide an AMI from which Vagrant will create the instance itself, for example the AMI you created your manual instance from.
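If you want to look up the AMI that an existing instance was launched from (so you can put that ID in aws.ami), the AWS CLI can report it; this is a sketch using the instance ID from the answer above.

# Prints the ImageId (ami-...) that the instance was launched from.
aws ec2 describe-instances \
  --region us-west-2 \
  --instance-ids i-bddcf889 \
  --query 'Reservations[0].Instances[0].ImageId' \
  --output text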