I am trying to set up an AWS AMI Packer build: http://www.packer.io/docs/builders/amazon-ebs.html
I am using the standard .json config:
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "account_id": "0123-4567-0890",
  "s3_bucket": "packer-images",
  "x509_cert_path": "x509.cert",
  "x509_key_path": "x509.key",
  "x509_upload_path": "/tmp",
  "ami_name": "packer-quick-start {{timestamp}}"
}
It connects fine, and I see it create the instance in my AWS account. However, I keep getting a "Timeout waiting for SSH" error. What could be causing this problem, and how can I resolve it?
As I mentioned in my comment above, this happens simply because it sometimes takes more than a minute for an instance to launch and become SSH-ready.
If you want, you can set a longer timeout; the default timeout in Packer is 1 minute.
For example, you could set it to 5 minutes by adding the following to your JSON config:
"ssh_timeout": "5m"
Related
I am trying to get AWS EC2 instance details using RunInstancesRequest. For that I followed the AWS doc https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-ec2-instances.html.
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
runInstancesRequest.withImageId(imageId)
        .withInstanceType(instanceType)
        .withMinCount(1)
        .withMaxCount(count)
        .withSecurityGroups(securityGroupName);
RunInstancesResult runInstancesResult = amazonEC2.runInstances(runInstancesRequest);
String instance_id = runInstancesResult.getReservation().getReservationId();
// wait for 2 minutes
DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest();
describeInstancesRequest.setInstanceIds(Arrays.asList(instance_id));
DescribeInstancesResult describeInstancesResult = amazonEC2.describeInstances(describeInstancesRequest);
for (Reservation reservation : describeInstancesResult.getReservations()) {
    for (Instance instance : reservation.getInstances()) {
        System.out.println(instance.getPublicDnsName());
    }
}
Here I am able to get an AWS EC2 instance up and running, but the problem I am facing is that I am not able to get the EC2 details from the RunInstancesResult object. The AWS documentation makes it look as if instance_id is the same as reservation_id, but I believe it is not, since instance IDs start with "i-" and reservation IDs with "r-".
How can I get the details of only the one EC2 instance I created through the API? Since a RunInstancesResult object is what the previous API call returns, the question is: how can I get AWS EC2 instance details using RunInstancesRequest?
A Reservation represents the request to launch instances. For example, you could use one launch request to create two instances; thus, a Reservation can contain multiple Instances.
If you look at the response object, you will see that the Reservation does indeed contain a list of instances, e.g.:
{
  "OwnerId": "123456789012",
  "ReservationId": "r-08626e73c547023b1",
  "Groups": [
    {
      "GroupName": "MySecurityGroup",
      "GroupId": "sg-903004f8"
    }
  ],
  "Instances": [
    {
      "Monitoring": {
        "State": "disabled"
      },
      "PublicDnsName": null,
      "RootDeviceType": "ebs",
      "State": {
        "Code": 0,
        "Name": "pending"
      },
      "EbsOptimized": false,
      "LaunchTime": "2013-07-19T02:42:39.000Z",
      "ProductCodes": [],
      "StateTransitionReason": null,
      "InstanceId": "i-1234567890abcdef0",
      "ImageId": "ami-1a2b3c4d",
      "PrivateDnsName": null,
      "KeyName": "MyKeyPair",
      etc.
There is some confusion in the AWS doc: it refers to the reservation ID as instance_id. After making the following change in my code, I was able to filter out the instance:
String instance_id = runInstancesResult.getReservation().getInstances().get(0).getInstanceId();
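Putting it together, a minimal sketch of the corrected flow (SDK for Java v1; the AMI ID and instance type are placeholders, and amazonEC2 is assumed to be an already-built AmazonEC2 client, so this will not run without AWS credentials):

```java
// Launch exactly one instance.
RunInstancesRequest runRequest = new RunInstancesRequest()
        .withImageId("ami-1a2b3c4d")   // placeholder AMI ID
        .withInstanceType("t2.micro")
        .withMinCount(1)
        .withMaxCount(1);
RunInstancesResult runResult = amazonEC2.runInstances(runRequest);

// The instance ID lives in the reservation's instance list,
// not in the reservation ID.
String instanceId = runResult.getReservation().getInstances().get(0).getInstanceId();

// Describe only that instance; PublicDnsName may be empty until the
// instance leaves the "pending" state, so poll or use a waiter.
DescribeInstancesRequest describeRequest =
        new DescribeInstancesRequest().withInstanceIds(instanceId);
for (Reservation r : amazonEC2.describeInstances(describeRequest).getReservations()) {
    for (Instance i : r.getInstances()) {
        System.out.println(i.getInstanceId() + " " + i.getPublicDnsName());
    }
}
```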
I'm trying to set up an Ubuntu server and log in with a non-default user. I've used cloud-config in the user data to set up an initial user, and Packer to provision the server:
system_info:
  default_user:
    name: my_user
    shell: /bin/bash
    home: /home/my_user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
Packer logs in and provisions the server as my_user, but when I launch an instance from the AMI, AWS installs the authorized_keys file under /home/ubuntu/.ssh/.
Packer config:
{
  "variables": {
    "aws_profile": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "profile": "{{user `aws_profile`}}",
    "region": "eu-west-1",
    "instance_type": "c5.large",
    "source_ami_filter": {
      "most_recent": true,
      "owners": ["099720109477"],
      "filters": {
        "name": "*ubuntu-xenial-16.04-amd64-server-*",
        "virtualization-type": "hvm",
        "root-device-type": "ebs"
      }
    },
    "ami_name": "my_ami_{{timestamp}}",
    "ssh_username": "my_user",
    "user_data_file": "cloud-config"
  }],
  "provisioners": [{
    "type": "shell",
    "pause_before": "10s",
    "inline": [
      "echo 'run some commands'"
    ]
  }]
}
Once the server has launched, both ubuntu and my_user users exist in /etc/passwd:
my_user:x:1000:1002:Ubuntu:/home/my_user:/bin/bash
ubuntu:x:1001:1003:Ubuntu:/home/ubuntu:/bin/bash
At what point does the ubuntu user get created, and is there a way to install the authorized_keys file under /home/my_user/.ssh at launch instead of ubuntu?
To persist the default user when using the AMI to launch new EC2 instances, you have to change the value in /etc/cloud/cloud.cfg and update this part:
system_info:
  default_user:
    # Update this!
    name: ubuntu
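One way to bake that change into the image is an extra shell provisioner in the Packer template. A sketch; the sed pattern assumes the stock cloud.cfg layout, so check the file on your base AMI first:

```json
{
  "type": "shell",
  "inline": [
    "sudo sed -i 's/name: ubuntu/name: my_user/' /etc/cloud/cloud.cfg"
  ]
}
```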
You can add your public keys when you create the user using cloud-init. Here is how to do it:
users:
  - name: <username>
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
Adding additional SSH user account with cloud-init
I'm building a custom platform to run our application. We have the default VPC deleted, so according to the documentation I have to specify the VPC and subnet IDs almost everywhere. The command I run for ebp looks like the following:
ebp create -v --vpc.id vpc-xxxxxxx --vpc.subnets subnet-xxxxxx --vpc.publicip
The above spins up the Packer environment without any issue; however, when Packer starts to build an instance I get the following error:
2017-12-07 18:07:05 UTC+0100 ERROR [Instance: i-00f376be9fc2fea34] Command failed on instance. Return code: 1 Output: 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX1.0.19-builder.log'. Hook /opt/elasticbeanstalk/hooks/packerbuild/build.rb failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2017-12-07 18:06:55 UTC+0100 ERROR 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX:1.0.19-builder.log'
2017-12-07 18:06:55 UTC+0100 ERROR Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: 28d94e8c-e24d-440f-9c64-88826e042e9d'
Both the template and the platform.yaml specify vpc_id and subnet_id; however, this is not taken into account by Packer.
platform.yaml:
version: "1.0"
provisioner:
type: packer
template: tomcat_platform.json
flavor: ubuntu1604
metadata:
maintainer: <Enter your contact details here>
description: Ubuntu running Tomcat
operating_system_name: Ubuntu Server
operating_system_version: 16.04 LTS
programming_language_name: Java
programming_language_version: 8
framework_name: Tomcat
framework_version: 7
app_server_name: "none"
app_server_version: "none"
option_definitions:
- namespace: "aws:elasticbeanstalk:container:custom:application"
option_name: "TOMCAT_START"
description: "Default application startup command"
default_value: ""
option_settings:
- namespace: "aws:ec2:vpc"
option_name: "VPCId"
value: "vpc-xxxxxxx"
- namespace: "aws:ec2:vpc"
option_name: "Subnets"
value: "subnet-xxxxxxx"
- namespace: "aws:elb:listener:80"
option_name: "InstancePort"
value: "8080"
- namespace: "aws:elasticbeanstalk:application"
option_name: "Application Healthcheck URL"
value: "TCP:8080"
tomcat_platform.json:
{
  "variables": {
    "platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
    "platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
    "platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-8fd760f6",
      "instance_type": "t2.micro",
      "ami_virtualization_type": "hvm",
      "ssh_username": "admin",
      "ami_name": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "ami_description": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "vpc_id": "vpc-xxxxxx",
      "subnet_id": "subnet-xxxxxx",
      "associate_public_ip_address": "true",
      "tags": {
        "eb_platform_name": "{{user `platform_name`}}",
        "eb_platform_version": "{{user `platform_version`}}",
        "eb_platform_arn": "{{user `platform_arn`}}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "scripts": [
        "builder/builder.sh"
      ]
    }
  ]
}
I'd appreciate any idea on how to make this work as expected. I found a couple of issues reported against Packer, but they seem to have been resolved on their side, and the documentation just says that the template must specify the target VPC and subnet.
The AWS documentation is a little misleading in this instance. You do need a default VPC in order to create a custom platform. From what I've seen, this is because the VPC flags that you pass to the ebp create command aren't passed along to the packer process that actually builds the platform.
To get around the error, you can simply create a new default VPC that you use only for custom platform creation.
Packer looks for a default VPC (its default behavior) while creating the resources required for building a custom platform, which includes launching an EC2 instance, creating a security group, etc. However, if a default VPC is not present in the region (for example, if it was deleted), the Packer build task fails with the following error:
Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: xyx-yxyx-xyx'
To fix this error, set the following attributes in the "builders" section of the template.json file so that Packer uses a custom VPC and subnet while creating the resources:
▸ vpc_id
▸ subnet_id
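In a template, the two keys sit directly in the builder definition, for example (placeholder IDs):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "vpc_id": "vpc-xxxxxxx",
      "subnet_id": "subnet-xxxxxxx"
    }
  ]
}
```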
I am following the instructions from http://docs.aws.amazon.com/vm-import/latest/userguide/import-vm-image.html to import an OVA. Here are the summarized steps I followed.
Step 1: Upload an OVA to S3 bucket.
Step 2: Create trust policy
Step 3: Create role policy
Step 4: Create containers.json with bucket name and ova filename.
Step 5: Run command for import-image
Command: aws ec2 import-image --description "My Unique OVA" --disk-containers file://containers.json
Step 6: Get the "ImportTaskId": "import-ami-fgi2cyyd" (in my case)
Step 7: Check status of import task
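For reference, the steps above correspond roughly to these CLI calls (the bucket, key, and task-id values are my placeholders from above; the policy file names are assumptions, but the role must be named vmimport for the service to find it):

```shell
aws s3 cp my_unique_ova.ova s3://my_unique_bucket/
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
aws ec2 import-image --description "My Unique OVA" --disk-containers file://containers.json
aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgi2cyyd
```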
Error:
C:\Users\joe>aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgi2cyyd
{
  "ImportImageTasks": [
    {
      "Status": "deleted",
      "SnapshotDetails": [
        {
          "UserBucket": {
            "S3Bucket": "my_unique_bucket",
            "S3Key": "my_unique_ova.ova"
          },
          "DiskImageSize": 2871726592.0,
          "Format": "VMDK"
        }
      ],
      "Description": "My Unique OVA",
      "StatusMessage": "ClientError: GRUB doesn't exist in /etc/default directory.",
      "ImportTaskId": "import-ami-fgi2cyyd"
    }
  ]
}
What am I doing wrong? I am on free-tier trying things out.
Contents of containers.json:
[
  {
    "Description": "My Unique OVA",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "my_unique_bucket",
      "S3Key": "my_unique_ova.ova"
    }
  }
]
The OVA file was corrupted in my case. I tried it with a smaller OVA and it worked fine.
Alright, I figured it out. The problem I ran into, which I assume will be the case for you as well, is that the VM probably isn't using the GRUB boot loader but rather LILO. I was able to change the boot loader by going into the GUI (startx) and opening the system configuration. Under the Boot menu I was able to switch from LILO to GRUB. Once I did that, I got further in the EC2 VM import process. Hope that helps.
I am using Ansible's cloudformation module to create a stack with 20 instances.
In the Ansible output I can only see the instance IDs.
After the stack is created I want to connect to the instances and configure them, but I am not sure how to get their IPs or hostnames from the instance IDs.
The cloudformation output looks like this:
{
  "last_updated_time": null,
  "logical_resource_id": "test2",
  "physical_resource_id": "i-24tf97306",
  "resource_type": "AWS::EC2::Instance",
  "status": "CREATE_COMPLETE",
  "status_reason": null
},
{
  "last_updated_time": null,
  "logical_resource_id": "test1",
  "physical_resource_id": "i-6533184348",
  "resource_type": "AWS::EC2::Instance",
  "status": "CREATE_COMPLETE",
  "status_reason": null
}
The ec2_remote_facts module is your friend here.
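A sketch of how it might be used to turn the stack's physical resource IDs into addresses (the filter name follows EC2's instance-id filter; the exact fields on the returned instances, such as public_ip_address, should be checked against the module docs for your Ansible version):

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: look up the instances created by the stack
      ec2_remote_facts:
        region: us-east-1
        filters:
          instance-id: ["i-24tf97306", "i-6533184348"]
      register: ec2_info

    - name: show each instance's public address
      debug:
        msg: "{{ item.id }} -> {{ item.public_ip_address }}"
      with_items: "{{ ec2_info.instances }}"
```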
You can retrieve instance metadata when Ansible runs on the instance, e.g.:
curl http://169.254.169.254/latest/meta-data/public-hostname
ec2-aa-bb-cc-ddd.ap-southeast-2.compute.amazonaws.com
where aa-bb-cc-ddd represents your IP and the full string represents the hostname.
You're using Ansible, so you could use Ansible's get_url module (http://docs.ansible.com/ansible/get_url_module.html) to perform the HTTP request.
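For instance, a task along these lines could save the hostname to a file on the instance (a sketch; the destination path is arbitrary, and it only works when the task runs on the EC2 instance itself, since the metadata address is link-local):

```yaml
- name: fetch the public hostname from instance metadata
  get_url:
    url: http://169.254.169.254/latest/meta-data/public-hostname
    dest: /tmp/public-hostname
```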