Packer credentials issue error "NoCredentialProviders"? - amazon-web-services

I'm trying to run a Packer template to build a basic EBS-backed AMI, but I keep getting the following error:
==> amazon-ebs: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
==> amazon-ebs: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Build 'amazon-ebs' errored: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've got my credentials pulling from environment variables in the template, like this:
{
  "type": "amazon-ebs",
  "access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
  "secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
  "region": "us-east-1",
  "source_ami": "ami-fce3c696",
  "instance_type": "t2.micro",
  "ssh_username": "broodadmin",
  "ami_name": "broodbox"
}
I pulled the key and secret from AWS, and have set up the permissions for the group per the instructions in the docs here. I've also ensured the user is in the group that has all of these permissions set.
Has anyone seen this?
Here is more of the template, with the Google Compute builder and the variables section included...
{
  "variables": {
    "account_json": "../../../secrets/account.json",
    "instance_name": "broodbox",
    "image_name": "broodbox"
  },
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "{{user `account_json`}}",
      "project_id": "that-big-universe",
      "source_image": "debian-8-jessie-v20160923",
      "zone": "us-west1-a",
      "instance_name": "{{user `instance_name`}}",
      "image_name": "{{user `image_name`}}",
      "image_description": "Node.js Server.",
      "communicator": "ssh",
      "ssh_username": "broodadmin"
    },
    {
      "type": "amazon-ebs",
      "access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
      "secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
      "region": "us-east-1",
      "source_ami": "ami-fce3c696",
      "instance_type": "t2.micro",
      "ssh_username": "broodadmin",
      "ami_name": "broodbox"
    }
  ],
  "provisioners": [
The full template is here: https://github.com/Adron/multi-cloud/blob/master/ecosystem/packer/nodejs_server.json
The repo is a multi-cloud repo I'm working on: https://github.com/Adron/multi-cloud

I had a go at running the template, and the only way I could get it to return that error was if the environment variables didn't exist. Once I created them, I got a different error:
==> amazon-ebs: Error launching source instance: VPCResourceNotSpecified: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
So, if you create the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, you should be fine. (I seem to recall occasionally seeing the variables called AWS_ACCESS_KEY and AWS_SECRET_KEY, which has tripped me up in the past.)
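As for the follow-on VPCResourceNotSpecified error: t2 instance types can only launch inside a VPC (the error says as much), so the usual fix is to point the builder at an existing subnet. A sketch with placeholder IDs, since these keys aren't in the original template:
"type": "amazon-ebs",
"region": "us-east-1",
"instance_type": "t2.micro",
"vpc_id": "vpc-xxxxxxxx",
"subnet_id": "subnet-xxxxxxxx",
...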

Doing a bit of rubber ducking here, but you haven't added the user variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You need to add these lines to your variables section:
"AWS_ACCESS_KEY_ID": "{{env `AWS_ACCESS_KEY_ID`}}",
"AWS_SECRET_ACCESS_KEY": "{{env `AWS_SECRET_ACCESS_KEY`}}"
See the docs on environment variables for more information.
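Merged with the variables the template already declares, the variables section would end up looking something like this (a sketch based on the posted template):
"variables": {
  "account_json": "../../../secrets/account.json",
  "instance_name": "broodbox",
  "image_name": "broodbox",
  "AWS_ACCESS_KEY_ID": "{{env `AWS_ACCESS_KEY_ID`}}",
  "AWS_SECRET_ACCESS_KEY": "{{env `AWS_SECRET_ACCESS_KEY`}}"
},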
But what you probably want to do instead is simply delete these two lines from your amazon-ebs builder section:
"access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
"secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",
Packer will then read the credentials from the default environment variables (which you are using anyway). See Packer Documentation - AWS credentials.
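With those two lines removed, the amazon-ebs builder from the question reduces to something like this (a sketch, with credentials left entirely to Packer's default lookup):
{
  "type": "amazon-ebs",
  "region": "us-east-1",
  "source_ami": "ami-fce3c696",
  "instance_type": "t2.micro",
  "ssh_username": "broodadmin",
  "ami_name": "broodbox"
}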

I believe access_key and secret_key are not as required as the docs make them out to be. I would remove those properties from the builder; as long as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are exported, the builder should pick them up. It will also fall back to the default credential lookup strategy of the AWS Go SDK, which finds ~/.aws/credentials, for example.
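For the shared credentials file route, a minimal ~/.aws/credentials looks like this (placeholder values):
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx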

My solution on Linux:
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="XXX"
and remove these lines from the config:
"access_key": "{{user `AWS_ACCESS_KEY_ID`}}",
"secret_key": "{{user `AWS_SECRET_ACCESS_KEY`}}",

Related

AWS EC2 SSH key pair not added to authorized_keys after Packer AMI build

Context
We have an automated AMI-building process, done by Packer, to set up our instance images after code changes and assign them to our load balancer launch configuration for faster autoscaling.
Problem
Recently, the instances launched with the development environment AMI build started refusing the corresponding private key. After launching an instance with that same AMI in a public subnet, I connected through EC2 Instance Connect and noticed that the public key was not present in /home/ec2-user/.ssh/authorized_keys, even though it was added at launch time through the launch configuration, or even manually through the console.
The only key present was the temporary SSH key Packer created during the AMI build.
Additional info
Note that the key pair is mentioned in the instance details as though it were present, but it is NOT, which was tricky to debug because we tend to trust what the console tells us.
What's even more puzzling is that the same AMI build for the QA environment (which is exactly the same apart from some application variables) does include the EC2 SSH key correctly, and we are using the same key for the DEV and QA environments.
We're using the same Packer version (1.5.1) as always to avoid inconsistencies, so it most likely isn't the cause, but I suspect the problem does come from the build, since it does not happen with other AMIs.
If someone has a clue about what's going on, I'd be glad to know.
Thanks!
Edit
Since it was requested in a comment, here is the "anonymized" Packer template; for confidentiality reasons I can't show the Ansible tasks or other details. However, note that this template is IDENTICAL to the one used for QA (same source code), which does not give the error.
"variables": {
"aws_region": "{{env `AWS_REGION`}}",
"inventory_file": "{{env `Environment`}}",
"instance_type": "{{env `INSTANCE_TYPE`}}"
},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `aws_region`}}",
"vpc_id": "vpc-xxxxxxxxxxxxxxx",
"subnet_id": "subnet-xxxxxxxxxxxxxxxxxx",
"security_group_id": "sg-xxxxxxxxxxxxxxxxxxxxx",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "amzn2-ami-hvm-2.0.????????.?-x86_64-gp2",
"root-device-type": "ebs",
"state": "available"
},
"owners": "amazon",
"most_recent": true
},
"instance_type": "t3a.{{ user `instance_type` }}",
"ssh_username": "ec2-user",
"ami_name": "APP {{user `inventory_file`}}",
"force_deregister": "true",
"force_delete_snapshot": "true",
"ami_users": [
"xxxxxxxxxxxxxxxxx",
"xxxxxxxxxxxxxxxxx",
"xxxxxxxxxxxxxxxxx"
]
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo amazon-linux-extras install ansible2",
"mkdir -p /tmp/libs"
]
},
{
"type": "file",
"source": "../frontend",
"destination": "/tmp"
},
{
"type": "file",
"source": "../backend/target/",
"destination": "/tmp"
},
{
"type": "ansible-local",
"playbook_file": "./app/instance-setup/playbook/setup-instance.yml",
"inventory_file": "./app/instance-setup/playbook/{{user `inventory_file`}}",
"playbook_dir": "./app/instance-setup/playbook",
"extra_arguments": [
"--extra-vars",
"ENV={{user `inventory_file`}}",
"--limit",
"{{user `inventory_file`}}",
"-v"
]
},
{
"type": "shell",
"inline": [
"sudo rm -rf /tmp/*"
]
}
]
}

Trouble building an AWS AMI using Packer. Fails with: amazon-ebs: Waiting for SSH to become available

While building an amazon-ebs image per the directions here and here, I put together a configuration and ran into this problem.
I found a number of Google results describing similar problems, but they didn't help.
What I found odd was that Packer was trying to connect to the private_ip of the spot instance that got launched.
I was seeing something like this:
==> amazon-ebs: Using ssh communicator to connect: 172.31.8.223
==> amazon-ebs: Waiting for SSH to become available...
Since I was not on the same local network, there was no route to that address, and eventually I got the error below. I checked the instance on the dashboard; sure enough, it was created and had a valid IP address. I was able to log into it, but for some reason Packer kept trying to connect to the private address.
'amazon-ebs' errored: Timeout waiting for SSH.
For what it's worth my configuration file was something like this:
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"ami_name": "{{user `ami_name`}}",
"instance_type": "{{user `aws_instance_type`}}",
"region": "{{user `aws_region`}}",
"secret_key": "{{user `aws_secret_key`}}",
"source_ami": "{{user `aws_ami_image`}}",
"ssh_username": "ubuntu",
"spot_price": "0.20",
"tags": {
"Name": "{{user `ami_name`}}-{{user `image_version`}}",
"OS_Version": "Ubuntu XYZ",
"Release": "XYZ",
"Description": "Ubuntu XYZ AMI for Me"
},
"user_data_file": "config/user-data.sh"
}
],
In my case this was fixed by adding the ssh_interface option to the amazon-ebs builder portion of my packer.json file. It's unclear why this is necessary, but Packer started working for me once I did it. My resulting config looked something like this:
"builders": [
{
"type": "amazon-ebs",
"ssh_interface": "public_ip",
"access_key": "{{user `aws_access_key`}}",
"ami_name": "{{user `ami_name`}}",
"instance_type": "{{user `aws_instance_type`}}",
"region": "{{user `aws_region`}}",
"secret_key": "{{user `aws_secret_key`}}",
"source_ami": "{{user `aws_ami_image`}}",
"ssh_username": "ubuntu",
"spot_price": "0.20",
"tags": {
"Name": "{{user `ami_name`}}-{{user `image_version`}}",
"OS_Version": "Ubuntu XYZ",
"Release": "XYZ",
"Description": "Ubuntu XYZ AMI for Me"
},
"user_data_file": "config/user-data.sh"
}
],

Packer: How do I create an AWS AMI with multiple block devices having different KMS keys

I am trying to use Packer version 1.3.2 to bake an AMI with multiple block devices, where each block device is encrypted with a different KMS key, which is also different from the KMS key used to encrypt the boot device.
At first I started to think that maybe this isn't supported by AWS; however, using the AWS console, I was able to launch an EC2 instance with an AMI having previously encrypted volumes, add another volume that used a different KMS key, and then create an AMI from it. I then used the new AMI to launch another EC2 instance, and the different KMS keys were maintained. This is because it did create a new snapshot for the additional volume with the different KMS key.
I have attempted many different variations using the amazon-ebs builder, with combinations of ami_block_device_mappings in conjunction with launch_block_device_mappings. At best, every combination generates the final volume snapshots tied to the AMI using the boot KMS key. I noticed that if I specify the alternate kms_key_id values in the launch_block_device_mappings like the following:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type": "gp2",
"volume_size": "{{user `var_volume_size`}}",
"delete_on_termination": true,
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true
},
{
"device_name": "/dev/sdc",
"volume_type": "gp2",
"volume_size": "{{user `varlog_volume_size`}}",
"delete_on_termination": true,
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true
}, ...
It creates temporary snapshots with the alternate KMS keys, but for the final AMI they are replaced with new ones encrypted with the boot KMS key, regardless of whether I also include ami_block_device_mappings or not. Even if I set delete_on_termination to false for the launch...
I then looked at this from another angle by trying to create the snapshots from EBS volumes separately from the amazon-ebs builder. Using the amazon-ebsvolume builder, I created empty EBS volumes:
"type": "amazon-ebsvolume",
...
"ebs_volumes": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"volume_size": 10,
"delete_on_termination": false,
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true,
"tags" : {
"Name" : "starter-volume-var",
"purpose" : "starter"
}
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"volume_size": 5,
"delete_on_termination": false,
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true,
"tags" : {
"Name" : "starter-volume-varlog",
"purpose" : "starter"
}
},...
I then created snapshots from them and attempted to use the snapshot_id of those instead of creating the volumes inline in the amazon-ebs builder:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"snapshot_id": "snap-08f2bed8aaa964469",
"delete_on_termination": true
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"snapshot_id": "snap-037a4a6255e8d161d",
"delete_on_termination": true
}
],..
Doing this I get the following error:
2018/11/01 03:04:23 ui error: ==> amazon-ebs: Error launching source instance: InvalidBlockDeviceMapping: snapshotId can only be modified on EBS devices
I tried repeating the encryption settings along with the snapshot_ids:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"snapshot_id": "snap-08f2bed8aaa964469",
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true,
"delete_on_termination": true
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"snapshot_id": "snap-037a4a6255e8d161d",
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true,
"delete_on_termination": true
}
],...
This results in a different error:
==> amazon-ebs: Error launching source instance: InvalidParameterDependency: The parameter KmsKeyId requires the parameter Encrypted to be set.
But I clearly have "encrypted": true
I am running out of ideas. I feel this should be possible; apparently I'm just not seeing it.
Came here because I had the same problem. I fixed this by moving the device to /dev/xvdf.
Digging into this further, the source AMI I was using has the following block device mappings associated with it. These ephemeral disks were not displayed in the console, so it took me a while to work out what was going on. A big clue was that I could mount the disk even before I defined it (I had originally defined it as an AMI mapping rather than a launch mapping by mistake, but already had the mount in my scripts):
Block devices: /dev/sda1=snap-0b399e12978e2290e:8:true:standard, /dev/xvdb=ephemeral0, /dev/xvdc=ephemeral1
I notice you have not listed the source AMI, but hopefully this helps.
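In other words, pick a device name that doesn't collide with the source AMI's ephemeral mappings. As a sketch, the earlier launch mapping with the device moved (not the poster's exact template):
"launch_block_device_mappings": [
  {
    "device_name": "/dev/xvdf",
    "volume_type": "gp2",
    "volume_size": "{{user `var_volume_size`}}",
    "delete_on_termination": true,
    "kms_key_id": "{{user `kms_key_arn_var`}}",
    "encrypted": true
  }
],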

How to create an EC2 machine with Packer?

In AWS, to make a new AMI I usually run commands manually to verify that they're working, and then I image that box to create an AMI. But there are alternatives like packer.io. What's a minimal working example of using this tool to make a simple customized AMI?
https://github.com/devopracy/devopracy-base/blob/master/packer/base.json is a Packer file that looks very similar to what I use at work for a base image. It's not tested, but we can go into it a bit. The base image is my own base; all services are built using it as a source AMI. That way I control my dependencies and ensure there's a consistent OS under my services. You could simply add cookbooks from the Chef Supermarket to see how provisioning a service works with this file, or use this as a base. As a base, you would make a similar, less detailed build for the service and use this as the source AMI.
This first part declares the variables I use to pack. The variables are injected before the build from a bash file which I DON'T CHECK IN TO SOURCE CONTROL. I keep the bash script in my home directory and source it before calling packer build. Note there's a cookbook path for the Chef provisioner. I use base_dir as the location on my dev box or the build server. I use a bootstrap key to build; Packer will make its own key for SSH if you don't specify one, but it's nice to make a key on AWS and then launch your builds with it. That makes debugging Packer easier on the fly.
"variables": {
"aws_access_key_id": "{{env `AWS_ACCESS_KEY`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"ssh_private_key_file": "{{env `SSH_PRIVATE_KEY_FILE`}}",
"cookbook_path": "{{env `CLOUD_DIR`}}/ops/devopracy-base/cookbooks",
"base_dir": "{{env `CLOUD_DIR`}}"
},
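For reference, the bash file I source looks roughly like this (a sketch with placeholder values and paths; the real one never gets committed):
# packer-env.sh -- source this before running packer build
export AWS_ACCESS_KEY="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
export SSH_PRIVATE_KEY_FILE="$HOME/.ssh/bootstrap.pem"
export CLOUD_DIR="$HOME/cloud"
Then: source packer-env.sh && packer build base.json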
The next part of the file has the builder. I use amazon-ebs at work and off work too; it's simpler to create one file, and often the larger instance types are only available as EBS. In this file, I resize the volume so we have a bit more room to install stuff. Note the source AMI isn't specified here; I look up the newest version here or there. Ubuntu has a handy site if you're using it, just google "ec2 ubuntu locator". You need to put in a source image to build on (or use a source_ami_filter; there's a sketch after the builder block below).
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key_id`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-west-2",
"source_ami": "",
"instance_type": "t2.small",
"ssh_username": "fedora",
"ami_name": "fedora-base-{{isotime \"2006-01-02\"}}",
"ami_description": "fedora 21 devopracy base",
"security_group_ids": [ "" ],
"force_deregister": "true",
"ssh_keypair_name": "bootstrap",
"ssh_private_key_file": "{{user `ssh_private_key_file`}}",
"subnet_id": "",
"ami_users": [""],
"ami_block_device_mappings": [{
"delete_on_termination": "true",
"device_name": "/dev/sda1",
"volume_size": 30
}],
"launch_block_device_mappings": [{
"delete_on_termination": "true",
"device_name": "/dev/sda1",
"volume_size": 30
}],
"tags": {
"stage": "dev",
"os": "fedora 21",
"release": "latest",
"role": "base",
"version": "0.0.1",
"lock": "none"
}
}],
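Since source_ami is left blank above, one alternative to looking the AMI up by hand is a source_ami_filter that picks the newest match automatically. A sketch (the Ubuntu owner ID and name pattern here are examples, not part of this build):
"source_ami_filter": {
  "filters": {
    "virtualization-type": "hvm",
    "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
    "root-device-type": "ebs"
  },
  "owners": ["099720109477"],
  "most_recent": true
},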
It's very useful to tag your images when you start doing automation in the cloud. These tags are how you'll handle your deploys and such. fedora is the default user for Fedora, ubuntu for Ubuntu, ec2-user for Amazon Linux, and so on. You can look those up in the docs for your distro.
Likewise, you need to add a security group to this file, and a subnet to launch in. Packer will use the defaults in AWS if you don't specify them, but if you're building on a build server or in a non-default VPC, you must specify them. force_deregister will get rid of an AMI with the same name on a successful build; I name by date, so I can iterate on the builds daily and not pile up a bunch of images.
Finally, I use the Chef provisioner. I have the cookbook in another repo, and the path to it on the build server is a variable at the top. Here we're looking at chef-zero for provisioning, which is technically not supported but works fine with the chef-client provisioner and a custom command. Besides the Chef run, I run some scripts of my own, and follow it up with Serverspec tests to make sure everything is hunky-dory.
"provisioners": [
{
"type": "shell",
"inline": [
]
},
{
"type": "shell",
"script": "{{user `base_dir`}}/ops/devopracy-base/files/ext_disk.sh"
},
{
"type": "shell",
"inline": [
"sudo reboot",
"sleep 30",
"sudo resize2fs /dev/xvda1"
]
},
{
"type": "shell",
"inline": [
"sudo mkdir -p /etc/chef && sudo chmod 777 /etc/chef",
"sudo mkdir -p /tmp/packer-chef-client && sudo chmod 777 /tmp/packer-chef-client"
]
},
{
"type": "file",
"source": "{{user `cookbook_path`}}",
"destination": "/etc/chef/cookbooks"
},
{
"type": "chef-client",
"execute_command": "cd /etc/chef && sudo chef-client --local-mode -c /tmp/packer-chef-client/client.rb -j /tmp/packer-chef-client/first-boot.json",
"server_url": "http://localhost:8889",
"skip_clean_node": "true",
"skip_clean_client": "true",
"run_list": ["recipe[devopracy-base::default]"]
},
{
"type": "file",
"source": "{{user `base_dir`}}/ops/devopracy-base/test/spec",
"destination": "/home/fedora/spec/"
},
{
"type": "file",
"source": "{{user `base_dir`}}/ops/devopracy-base/test/Rakefile",
"destination": "/home/fedora/Rakefile"
},
{
"type": "shell",
"inline": ["/opt/chef/embedded/bin/rake spec"]
},
{
"type": "shell",
"inline": ["sudo chmod 600 /etc/chef"]
}
]
}
Naturally there's some goofy business in here around the chmoding of the Chef dir, and it's not obviously secure; I run my builds in a private subnet. I hope this helps you get off the ground with Packer, which is actually an amazing piece of software, and good fun! Ping me in the comments with any questions, or hit me up on GitHub. All the devopracy stuff is a WIP, but those files will probably mature when I get more time to work on it :P
Good Luck!

Is there a way to build an EBS AMI with Packer when SSH login is restricted by LDAP?

I want to build an EBS-backed AMI from a source AMI whose SSH access is restricted by LDAP.
When I build with Packer, I get an "ssh handshake failed" error.
I tried to SSH into the Packer builder instance manually, but I get a password prompt because of the LDAP restriction and can't log in.
How can I build and provision over SSH with Packer?
{
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{key}",
    "secret_key": "{key}",
    "region": "{region}",
    "source_ami": "{myami}",
    "instance_type": "m3.medium",
    "ssh_username": "ec2-user",
    "ssh_timeout": "10m",
    "ami_name": "ami_from_packer {{timestamp}}",
    "iam_instance_profile": "packer"
  }]
}