To make a new AMI in AWS, I usually run commands manually to verify that they're working, and then image that box to create an AMI. But there are alternatives like packer.io. What's a minimal working example of using that tool to make a simple customized AMI?
https://github.com/devopracy/devopracy-base/blob/master/packer/base.json There's a packer file there that looks very similar to what I use at work for a base image. It's not tested, but we can go through it a bit. The base image is my own base: all services are built using it as a source AMI. That way I control my dependencies and ensure there's a consistent OS under my services. You could simply add cookbooks from the Chef Supermarket to see how provisioning a service works with this file, or use it as a base. As a base, you would make a similar, less detailed build for the service and call this one as the source AMI.
This first part declares the variables I use to pack. The variables are injected before the build from a bash file which I DON'T CHECK IN TO SOURCE CONTROL. I keep the bash script in my home directory and source it before calling packer build. Note there's a cookbook path for the chef provisioner. I use base_dir as the location on my dev box or the build server. I use a bootstrap key to build; packer will make its own key to ssh in if you don't specify one, but it's nice to make a key on AWS and launch your builds with it. That makes debugging packer on the fly easier.
"variables": {
  "aws_access_key_id": "{{env `AWS_ACCESS_KEY`}}",
  "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
  "ssh_private_key_file": "{{env `SSH_PRIVATE_KEY_FILE`}}",
  "cookbook_path": "{{env `CLOUD_DIR`}}/ops/devopracy-base/cookbooks",
  "base_dir": "{{env `CLOUD_DIR`}}"
},
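A minimal sketch of that workflow, assuming placeholder values (the file name, key, and paths below are illustrative, not the real file):

```shell
# aws_creds.env -- kept OUT of source control; values are placeholders
cat > aws_creds.env <<'EOF'
export AWS_ACCESS_KEY="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret"
export SSH_PRIVATE_KEY_FILE="$HOME/.ssh/bootstrap.pem"
export CLOUD_DIR="$HOME/cloud"
EOF

# Source the variables, then build; packer picks them up via {{env `...`}}
source aws_creds.env
# packer build base.json
```

Keeping the values in an env file like this means the template itself stays safe to commit.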
The next part of the file has the builder. I use amazon-ebs at work and off work too; it's simpler to create one file, and often the larger instance types are only available as EBS. In this file, I resize the volume so we have a bit more room to install things. Note the source AMI isn't specified here; I look up the newest version when I build. Ubuntu has a handy site for this if you're using it (search for "ec2 ubuntu locator"). You need to put in a source image to build on.
"builders": [{
  "type": "amazon-ebs",
  "access_key": "{{user `aws_access_key_id`}}",
  "secret_key": "{{user `aws_secret_key`}}",
  "region": "us-west-2",
  "source_ami": "",
  "instance_type": "t2.small",
  "ssh_username": "fedora",
  "ami_name": "fedora-base-{{isotime \"2006-01-02\"}}",
  "ami_description": "fedora 21 devopracy base",
  "security_group_ids": [ "" ],
  "force_deregister": "true",
  "ssh_keypair_name": "bootstrap",
  "ssh_private_key_file": "{{user `ssh_private_key_file`}}",
  "subnet_id": "",
  "ami_users": [""],
  "ami_block_device_mappings": [{
    "delete_on_termination": "true",
    "device_name": "/dev/sda1",
    "volume_size": 30
  }],
  "launch_block_device_mappings": [{
    "delete_on_termination": "true",
    "device_name": "/dev/sda1",
    "volume_size": 30
  }],
  "tags": {
    "stage": "dev",
    "os": "fedora 21",
    "release": "latest",
    "role": "base",
    "version": "0.0.1",
    "lock": "none"
  }
}],
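Since "source_ami" is left blank above, here's a hedged sketch of looking up the newest matching image with the AWS CLI instead of the web locator. The owner ID 099720109477 is Canonical's; the name filter and region are illustrative, so swap them for your distro and release:

```shell
# Query for the newest matching AMI; paste the result into "source_ami",
# or wire it in from a script. Requires configured AWS credentials.
lookup_latest_ami() {
  aws ec2 describe-images \
    --region us-west-2 \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*" \
              "Name=state,Values=available" \
    --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
    --output text
}

# SOURCE_AMI=$(lookup_latest_ami)   # uncomment on a box with AWS access
```

Sorting by CreationDate and taking the last element is what gives you the most recent build of that image line.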
It's very useful to tag your images when you start doing automation in the cloud. These tags are how you'll handle your deploys and such. fedora is the default SSH user for Fedora, ubuntu for Ubuntu, ec2-user for Amazon Linux, and so on. You can look those up in the docs for your distro.
Likewise, you need to add a security group to this file, and a subnet to launch in. Packer will use the defaults in AWS if you don't specify them, but if you're building on a build server or in a non-default VPC, you must specify them. Force deregister gets rid of an AMI with the same name on a successful build; since I name by date, I can iterate on builds daily without piling up a bunch of images.
Finally, I use the chef provisioner. I keep the cookbook in another repo, and the path to it on the build server is a variable at the top. Here we're looking at chef-zero for provisioning, which is technically not supported but works fine with the chef-client provisioner and a custom command. Besides the chef run, I run some scripts of my own, and follow up with serverspec tests to make sure everything is hunky dory.
"provisioners": [
  {
    "type": "shell",
    "inline": [
    ]
  },
  {
    "type": "shell",
    "script": "{{user `base_dir`}}/ops/devopracy-base/files/ext_disk.sh"
  },
  {
    "type": "shell",
    "inline": [
      "sudo reboot",
      "sleep 30",
      "sudo resize2fs /dev/xvda1"
    ]
  },
  {
    "type": "shell",
    "inline": [
      "sudo mkdir -p /etc/chef && sudo chmod 777 /etc/chef",
      "sudo mkdir -p /tmp/packer-chef-client && sudo chmod 777 /tmp/packer-chef-client"
    ]
  },
  {
    "type": "file",
    "source": "{{user `cookbook_path`}}",
    "destination": "/etc/chef/cookbooks"
  },
  {
    "type": "chef-client",
    "execute_command": "cd /etc/chef && sudo chef-client --local-mode -c /tmp/packer-chef-client/client.rb -j /tmp/packer-chef-client/first-boot.json",
    "server_url": "http://localhost:8889",
    "skip_clean_node": "true",
    "skip_clean_client": "true",
    "run_list": ["recipe[devopracy-base::default]"]
  },
  {
    "type": "file",
    "source": "{{user `base_dir`}}/ops/devopracy-base/test/spec",
    "destination": "/home/fedora/spec/"
  },
  {
    "type": "file",
    "source": "{{user `base_dir`}}/ops/devopracy-base/test/Rakefile",
    "destination": "/home/fedora/Rakefile"
  },
  {
    "type": "shell",
    "inline": ["/opt/chef/embedded/bin/rake spec"]
  },
  {
    "type": "shell",
    "inline": ["sudo chmod 600 /etc/chef"]
  }
]
}
Naturally there's some goofy business in here around chmodding the chef dir, and it's not obviously secure; I run my builds in a private subnet. I hope this helps you get off the ground with packer, which is actually an amazing piece of software, and good fun! Ping me in the comments with any questions, or hit me up on github. All the devopracy stuff is a WIP, but those files will probably mature when I get more time to work on it :P
Good Luck!
Context
We have an automated AMI building process, done with Packer, to set up our instance images after code changes and assign them to our load balancer's launch configuration for faster autoscaling.
Problem
Recently, instances launched from the development environment AMI build started refusing the corresponding private key. After attaching an instance with that same AMI to a public subnet, I connected through EC2 Instance Connect and noticed that the public key was not present in the /home/ec2-user/.ssh/authorized_keys file, even though it was added at launch time through the launch configuration, or even manually through the console.
The only key present was the temporary SSH key Packer created during the AMI packaging.
Additional info
Note that the key pair is mentioned in the instance details as though it were present, but it is NOT, which was tricky to debug because we tend to trust what the console tells us.
What's even more puzzling is that the same AMI build for the QA environment (which is exactly the same apart from some application variables) does include the EC2 SSH key correctly, and we use the same key for the DEV and QA environments.
We're using the same Packer version (1.5.1) as always to avoid inconsistencies, so it most likely doesn't come from that, but I suspect it does come from the build, since it doesn't happen with other AMIs.
If someone has a clue about what's going on, I'd be glad to know.
Thanks !
Edit
Since it was requested in the comments, here is the "anonymized" Packer template; for confidentiality reasons I can't show the Ansible tasks or other details. However, note that this template is IDENTICAL to the one used for QA (same source code), which does not give the error.
"variables": {
  "aws_region": "{{env `AWS_REGION`}}",
  "inventory_file": "{{env `Environment`}}",
  "instance_type": "{{env `INSTANCE_TYPE`}}"
},
"builders": [
  {
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "{{user `aws_region`}}",
    "vpc_id": "vpc-xxxxxxxxxxxxxxx",
    "subnet_id": "subnet-xxxxxxxxxxxxxxxxxx",
    "security_group_id": "sg-xxxxxxxxxxxxxxxxxxxxx",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn2-ami-hvm-2.0.????????.?-x86_64-gp2",
        "root-device-type": "ebs",
        "state": "available"
      },
      "owners": "amazon",
      "most_recent": true
    },
    "instance_type": "t3a.{{ user `instance_type` }}",
    "ssh_username": "ec2-user",
    "ami_name": "APP {{user `inventory_file`}}",
    "force_deregister": "true",
    "force_delete_snapshot": "true",
    "ami_users": [
      "xxxxxxxxxxxxxxxxx",
      "xxxxxxxxxxxxxxxxx",
      "xxxxxxxxxxxxxxxxx"
    ]
  }
],
"provisioners": [
  {
    "type": "shell",
    "inline": [
      "sudo amazon-linux-extras install ansible2",
      "mkdir -p /tmp/libs"
    ]
  },
  {
    "type": "file",
    "source": "../frontend",
    "destination": "/tmp"
  },
  {
    "type": "file",
    "source": "../backend/target/",
    "destination": "/tmp"
  },
  {
    "type": "ansible-local",
    "playbook_file": "./app/instance-setup/playbook/setup-instance.yml",
    "inventory_file": "./app/instance-setup/playbook/{{user `inventory_file`}}",
    "playbook_dir": "./app/instance-setup/playbook",
    "extra_arguments": [
      "--extra-vars",
      "ENV={{user `inventory_file`}}",
      "--limit",
      "{{user `inventory_file`}}",
      "-v"
    ]
  },
  {
    "type": "shell",
    "inline": [
      "sudo rm -rf /tmp/*"
    ]
  }
]
}
"AppLaunchConfig": {
  "Type": "AWS::AutoScaling::LaunchConfiguration",
  "Properties": {
    "AssociatePublicIpAddress": true,
    "EbsOptimized": false,
    "ImageId": {
      "Ref": "amiID"
    },
    "InstanceType": "t3.small",
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": ["", [
          "#!/bin/bash -xe\n",
          "har-extractor /home/ubuntu/work/git.codavel.com.har --output /home/ubuntu/extract/\n"
        ]]
      }
    },
    "SecurityGroups": [
      {
        "Ref": "InstanceSecGroup"
      }
    ]
  }
},
Hi Team,
This is my CloudFormation template for auto scaling, and it works properly, but one command I run in UserData is not working. I have tried every possible thing, but nothing worked, and if I run the command manually it works fine.
So please help me work out how to resolve this issue. I am running this command on an Ubuntu machine.
You can debug this by taking a look at the /var/log/cloud-init-output.log file, which contains the output of your Linux UserData commands.
If that does not provide any useful debug output, the next step would be to try running the command as root, which mimics exactly how UserData executes it.
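As a sketch of that debugging flow on the instance itself (the har-extractor line is the one from the template above; the PATH check at the end is a guess at a common cause, since UserData runs as root and per-user tool installs often aren't on root's PATH):

```shell
# See what cloud-init captured from the UserData run
sudo tail -n 100 /var/log/cloud-init-output.log

# Re-run the command the way UserData does: as root, non-interactive
sudo bash -c 'har-extractor /home/ubuntu/work/git.codavel.com.har --output /home/ubuntu/extract/'

# If it fails only as root, check whether har-extractor is on root's PATH
# (tools installed for the ubuntu user, e.g. via npm, often are not)
which har-extractor
sudo which har-extractor
```

Because the template uses `#!/bin/bash -xe`, any command that exits non-zero aborts the whole UserData script, which also shows up in that log.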
I'm trying to run a Packer template to build a basic AWS EBS-backed instance, but I keep getting the following error:
==> amazon-ebs: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
==> amazon-ebs: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Build 'amazon-ebs' errored: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've got my credentials pulling from environment variables like this in the template.
{
  "type": "amazon-ebs",
  "access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
  "secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
  "region": "us-east-1",
  "source_ami": "ami-fce3c696",
  "instance_type": "t2.micro",
  "ssh_username": "broodadmin",
  "ami_name": "broodbox"
}
I pulled the key and secret from AWS, and have set up the permissions for the group per the instructions in the docs here. I've also ensured the user is in the group that has all of these permissions set.
Seen this?
Here is more of the template, with the Google Compute builder and the variables listed...
{
  "variables": {
    "account_json": "../../../secrets/account.json",
    "instance_name": "broodbox",
    "image_name": "broodbox"
  },
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "{{user `account_json`}}",
      "project_id": "that-big-universe",
      "source_image": "debian-8-jessie-v20160923",
      "zone": "us-west1-a",
      "instance_name": "{{user `instance_name`}}",
      "image_name": "{{user `image_name`}}",
      "image_description": "Node.js Server.",
      "communicator": "ssh",
      "ssh_username": "broodadmin"
    },
    {
      "type": "amazon-ebs",
      "access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
      "secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
      "region": "us-east-1",
      "source_ami": "ami-fce3c696",
      "instance_type": "t2.micro",
      "ssh_username": "broodadmin",
      "ami_name": "broodbox"
    }
  ],
  "provisioners": [
The full template is here: https://github.com/Adron/multi-cloud/blob/master/ecosystem/packer/nodejs_server.json
The repo is a multi-cloud repo I'm working on: https://github.com/Adron/multi-cloud
I had a go at running the template, and the only way I could get it to return that error was if the environment variables didn't exist. Once I created them, I got a different error:
==> amazon-ebs: Error launching source instance: VPCResourceNotSpecified: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
So, if you create the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, you should be fine. (I seem to recall that I've occasionally seen the variables called AWS_ACCESS_KEY and AWS_SECRET_KEY which has tripped me up in the past.)
Doing a bit of rubber ducking here, but you haven't added the user variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You need to add these lines to your variables section:
"AWS_ACCESS_KEY_ID": "{{env `AWS_ACCESS_KEY_ID`}}",
"AWS_SECRET_ACCESS_KEY": "{{env `AWS_SECRET_ACCESS_KEY`}}"
See the docs about environment variables for more information.
But what you probably want to do is to just delete these two lines from your amazon-ebs builder section:
"access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
"secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
Packer will then read them from the default environment variables (which you are using anyway). See Packer Documentation - AWS credentials.
I believe access_key and secret_key are not as required as the docs make them out to be. I would remove those properties from the builder and — as long as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are exported — the builder should pick them up. It will also use the default credential lookup strategy used by the AWS Go SDK to find ~/.aws/credentials, for example.
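A quick sanity check of that default chain (a sketch: the key values are placeholders, and the AWS/Packer calls are commented out since they need a real account):

```shell
# Option 1: export the standard variables (placeholder values shown)
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret"

# Option 2: export nothing and rely on ~/.aws/credentials instead

# Confirm the chain resolves before blaming Packer, then build with
# no access_key/secret_key lines in the template at all:
# aws sts get-caller-identity
# packer build nodejs_server.json
```

If `aws sts get-caller-identity` returns your account and ARN, Packer's builder will find the same credentials through the SDK's default lookup.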
My decision for Linux:
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="XXX"
and remove from config
"access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
"secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
I am baking AWS AMIs using Packer and Ansible. It seems that after about 6 minutes or so of Packer running, the process will fail. Aside from it taking about 6 minutes, I can't find a logical explanation for what is happening. The Ansible playbook will fail at different points along the way, but always about 6 minutes after I launch Packer.
I always get an Ansible error when I hit this issue: either Timeout (12s) waiting for privilege escalation prompt: or Connection to 127.0.0.1 closed.\r\n
Is there a way to extend the timeouts associated with a playbook or Packer builder?
Packer file contents:
{
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "../ansible/nexus.yml"
  }],
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-2",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn-ami-hvm*-gp2",
        "root-device-type": "ebs"
      },
      "owners": ["137112412989"],
      "most_recent": true
    },
    "instance_type": "t2.medium",
    "ami_virtualization_type": "hvm",
    "ssh_username": "ec2-user",
    "ami_name": "Nexus (Latest Amazon Linux Base) {{isotime \"2006-01-02T15-04-05-06Z\"| clean_ami_name}}",
    "ami_description": "Nexus AMI",
    "run_tags": {
      "AmiName": "Nexus",
      "AmiCreatedBy": "Packer"
    },
    "tags": {
      "Name": "Nexus",
      "CreatedBy": "Packer"
    }
  }]
}
I solved this issue by adding the following parameters to the Ansible configuration file:
[defaults]
forks = 1
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPath=none -o ControlPersist=no
pipelining = false
Hope it helps.
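To make sure those settings are actually picked up when Packer invokes ansible-playbook, one approach (a sketch; the file location is illustrative) is to write the config and point the ANSIBLE_CONFIG environment variable at it before the build:

```shell
# Write the Ansible config with ControlMaster disabled
cat > ansible.cfg <<'EOF'
[defaults]
forks = 1

[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPath=none -o ControlPersist=no
pipelining = false
EOF

# Ansible reads ANSIBLE_CONFIG first, before any other config file location
export ANSIBLE_CONFIG="$PWD/ansible.cfg"
# packer build nexus.json
```

Ansible also reads an ansible.cfg in the current working directory, so running packer build from the directory containing the file works too; the env var just makes it explicit.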
While creating my AWS CloudFormation template, I hit the 16KB limit on user data. Then I found out I can put the script (with all my user data) in S3, copy that file over as part of user data, and run it. My question is: how can I take the parameters that I am passing into CloudFormation (like below) and pass them into the file/script/userdata that I download from S3? In other words, how can I pass the CloudFormation parameters into the /root/usr.sh script?
Here is my user data:
"UserData": {
  "Fn::Base64": {
    "Fn::Join": [
      "",
      [
        "#!/bin/bash -x\n\n",
        "yum -y install tcsh lvm2 sysstat\n\n\n",
        "# AWS CLI download and Installation\n",
        "curl \"https://s3.amazonaws.com/aws-cli/awscli-bundle.zip\" -o \"/usr/awscli-bundle.zip\"\n",
        "unzip /usr/awscli-bundle.zip -d /usr/awscmdline/\n",
        "/usr/awscmdline/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws\n",
        "/usr/local/aws/bin/aws configure set region ",
        {
          "Ref": "AWS::Region"
        },
        "\n",
        "/usr/local/bin/aws s3 cp s3://test123/usr.sh /root/usr.sh \n",
        "chmod 744 /root/usr.sh \n",
        "/root/usr.sh"
      ]
    ]
  }
}
and here are the sample parameters:
"Parameters": {
  "SelectInstanceType": {
    "Description": "EC2 instance type",
    "Type": "String",
    "Default": "r3.8xlarge",
    "AllowedValues": [
      "r3.large",
      "r3.xlarge",
      "r3.2xlarge",
      "r3.4xlarge",
      "r3.8xlarge",
      "c4.large",
      "c4.xlarge",
      "c4.2xlarge",
      "c4.4xlarge",
      "c4.8xlarge"
    ],
    "ConstraintDescription": "Must be a valid EC2 instance type."
  },
  "Keyname": {
    "Description": "Keypair to use to launch the instance",
    "Type": "AWS::EC2::KeyPair::KeyName"
  },
  "IPAddress": {
    "Description": "Private IP",
    "Type": "String",
    "Default": "10.10.10.X"
  },
There are a few ways you could do it...
Configurations in a file
You could create a file with your configurations and then read the file from your script. For an example see: Setting environment variables with user-data
Set environment variables
As part of your UserData script, before you download and call the script, set environment variables (also shown in the example file above).
Pass parameters when executing your script
When downloading a script from Amazon S3 and then calling it, append parameters in the same way that your script currently inserts AWS::Region. Your script will then need to read those parameters from the command line.
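A sketch of the script side of that third option (the parameter names match the template above; the echo and the example values are just for illustration):

```shell
# In the UserData Fn::Join, the call would become something like:
#   "/root/usr.sh \"", { "Ref": "SelectInstanceType" }, "\" \"", { "Ref": "IPAddress" }, "\"\n"
# The downloaded script then reads the values positionally:
cat > usr.sh <<'EOF'
#!/bin/bash
INSTANCE_TYPE="$1"   # from the SelectInstanceType parameter
PRIVATE_IP="$2"      # from the IPAddress parameter
echo "type=$INSTANCE_TYPE ip=$PRIVATE_IP"
EOF
chmod 744 usr.sh

./usr.sh "r3.8xlarge" "10.10.10.20"
# -> type=r3.8xlarge ip=10.10.10.20
```

Quoting the substituted values in the Fn::Join line matters, since parameter values containing spaces would otherwise split into multiple arguments.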
Refer to parameters like this: { "Ref" : "InstanceTypeParameter" }
See: CloudFormation Parameters documentation