I'm trying to set up an Ubuntu server and log in with a non-default user. I've used cloud-config in the user data to set up an initial user, and Packer to provision the server:
system_info:
  default_user:
    name: my_user
    shell: /bin/bash
    home: /home/my_user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
Packer logs in and provisions the server as my_user, but when I launch an instance from the AMI, AWS installs the authorized_keys file under /home/ubuntu/.ssh/
Packer config:
{
  "variables": {
    "aws_profile": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "profile": "{{user `aws_profile`}}",
    "region": "eu-west-1",
    "instance_type": "c5.large",
    "source_ami_filter": {
      "most_recent": true,
      "owners": ["099720109477"],
      "filters": {
        "name": "*ubuntu-xenial-16.04-amd64-server-*",
        "virtualization-type": "hvm",
        "root-device-type": "ebs"
      }
    },
    "ami_name": "my_ami_{{timestamp}}",
    "ssh_username": "my_user",
    "user_data_file": "cloud-config"
  }],
  "provisioners": [{
    "type": "shell",
    "pause_before": "10s",
    "inline": [
      "echo 'run some commands'"
    ]
  }]
}
Once the server has launched, both ubuntu and my_user users exist in /etc/passwd:
my_user:x:1000:1002:Ubuntu:/home/my_user:/bin/bash
ubuntu:x:1001:1003:Ubuntu:/home/ubuntu:/bin/bash
At what point does the ubuntu user get created, and is there a way to install the authorized_keys file under /home/my_user/.ssh at launch instead of ubuntu?
To persist the default user when launching new EC2 instances from the AMI, you have to change the value in /etc/cloud/cloud.cfg and update this part:
system_info:
  default_user:
    # Update this!
    name: ubuntu
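For example, you could bake that change into the image from a Packer shell provisioner. This is only a sketch; it assumes the stock Ubuntu cloud.cfg still contains the line "name: ubuntu" and that my_user is the user you want at launch:
# Rewrite the default_user name that cloud-init will use on first boot
sudo sed -i 's/^\( *name:\) ubuntu$/\1 my_user/' /etc/cloud/cloud.cfg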
You can add your public keys when you create the user using cloud-init. Here is how you do it.
users:
  - name: <username>
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
Adding additional SSH user account with cloud-init
I've created a simple VM in VirtualBox and installed Ubuntu; however, I am unable to import it into AWS and generate an AMI from it.
Operating system: Ubuntu 20.04.4 LTS
Kernel: Linux 5.4.0-104-generic
I've followed the steps provided according to the docs and set up role-policy.json & trust-policy.json:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
I keep running into the error:
{
  "ImportImageTasks": [
    {
      "Description": "My server VM",
      "ImportTaskId": "import-ami-xxx",
      "SnapshotDetails": [
        {
          "DeviceName": "/dev/sde",
          "DiskImageSize": 2362320896.0,
          "Format": "VMDK",
          "Status": "completed",
          "Url": "s3://xxxx/simple-vm.ova",
          "UserBucket": {
            "S3Bucket": "xxx",
            "S3Key": "simple-vm.ova"
          }
        }
      ],
      "Status": "deleted",
      "StatusMessage": "ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2.",
      "Tags": []
    }
  ]
}
I've tried changing the disk to and from .vdi and .vmdk.
I've tried disabling the floppy drive and updating initramfs.
I ran into this error and was able to get around it by using import-snapshot instead of import-image. Then I could deploy the snapshot using the ordinary means of creating an image from the snapshot.
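In case it helps, here is a rough sketch of that flow with the AWS CLI (the bucket, key, and IDs are placeholders, and it assumes you upload the raw VMDK disk extracted from the OVA rather than the OVA itself):
# 1. Import the disk from S3 as an EBS snapshot
aws ec2 import-snapshot \
  --description "simple-vm disk" \
  --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=simple-vm-disk001.vmdk}"
# 2. Wait for the task to complete and note the SnapshotId
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxx
# 3. Register an AMI from the snapshot
aws ec2 register-image \
  --name "simple-vm" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-xxxxxxxx}"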
Does anyone know a way to install the CloudWatch agent automatically on EC2 instances while launching them through a launch template/configuration in Terraform?
I have just struggled through the process myself and would have benefited from a clear guide. So here's my attempt to provide one (for Amazon Linux 2 AMI):
Create your CloudWatch agent configuration JSON file, which defines the metrics you want to collect. The easiest way is to SSH onto your EC2 instance and run this command to generate the file using the wizard: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard. This is what my file looks like; it is the most basic config, which only collects disk and memory usage metrics every 60 seconds:
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "run_as_user": "root"
  },
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
Create a shell script template file that will run when the EC2 instance is created. This is what mine looks like; it is called userdata.sh.tpl:
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Install Cloudwatch agent
sudo yum install -y amazon-cloudwatch-agent
# Write Cloudwatch agent configuration file
sudo cat > /opt/aws/amazon-cloudwatch-agent/bin/config.json <<EOF
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
EOF
# Start Cloudwatch agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
--==BOUNDARY==--
Create a directory called templates in your terraform module directory and store the userdata.sh.tpl file in there.
Create a data block in the appropriate .tf file as follows:
data "template_file" "user_data" {
template = file("${path.module}/templates/userdata.sh.tpl")
vars = {
...
}
}
In your aws_launch_configuration block, pass in the following value for the user_data variable:
resource "aws_launch_configuration" "example" {
name = "example_server_name"
image_id = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
user_data = data.template_file.user_data.rendered
}
Add the CloudWatchAgentServerPolicy policy to the IAM role used by your EC2 server. This will give your role all the required service-level permissions, e.g. "cloudwatch:PutMetricData".
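If that role is managed in the same Terraform code, a minimal sketch of the attachment could look like this (aws_iam_role.ec2_role is a placeholder for whatever role your instance profile actually uses):
resource "aws_iam_role_policy_attachment" "cloudwatch_agent" {
  # Attach the AWS-managed policy needed by the CloudWatch agent
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}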
Relaunch your EC2 server and SSH on to check that the CloudWatch agent is installed and running, using systemctl status amazon-cloudwatch-agent.service.
Navigate to the CloudWatch UI and select Metrics from the left-hand menu. You should see CWAgent in the list of namespaces.
Yes, this can be achieved with a Bash script (assuming Linux).
Steps to consider:
Create a userdata.sh file
Use templatefile to link userdata.sh to the launch template (see the sketch after this list)
Write user data to install the AWS CloudWatch agent (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html)
Terminate/create the instance
Check the CloudWatch agent is installed, up and running: systemctl status amazon-cloudwatch-agent
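A minimal sketch of the templatefile step with a launch template (the resource names and the AMI data source are placeholders, not from the original answer; launch templates expect base64-encoded user data):
resource "aws_launch_template" "example" {
  name_prefix   = "example-"
  image_id      = data.aws_ami.amazon_linux_2.id
  instance_type = "t3.micro"

  # Render the template and base64-encode it, as launch templates require
  user_data = base64encode(templatefile("${path.module}/userdata.sh.tpl", {}))
}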
I'm building a custom platform to run our application. We have the default VPC deleted, so according to the documentation I have to specify the VPC and subnet IDs almost everywhere. So the command I run for ebp looks like the following:
ebp create -v --vpc.id vpc-xxxxxxx --vpc.subnets subnet-xxxxxx --vpc.publicip
The above spins up the Packer environment without any issue; however, when Packer starts to build an instance I get the following error:
2017-12-07 18:07:05 UTC+0100 ERROR [Instance: i-00f376be9fc2fea34] Command failed on instance. Return code: 1 Output: 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX1.0.19-builder.log'. Hook /opt/elasticbeanstalk/hooks/packerbuild/build.rb failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2017-12-07 18:06:55 UTC+0100 ERROR 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX:1.0.19-builder.log'
2017-12-07 18:06:55 UTC+0100 ERROR Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: 28d94e8c-e24d-440f-9c64-88826e042e9d'
Both the template and the platform.yaml specify the VPC ID and subnet ID; however, this is not taken into account by Packer.
platform.yaml:
version: "1.0"
provisioner:
type: packer
template: tomcat_platform.json
flavor: ubuntu1604
metadata:
maintainer: <Enter your contact details here>
description: Ubuntu running Tomcat
operating_system_name: Ubuntu Server
operating_system_version: 16.04 LTS
programming_language_name: Java
programming_language_version: 8
framework_name: Tomcat
framework_version: 7
app_server_name: "none"
app_server_version: "none"
option_definitions:
- namespace: "aws:elasticbeanstalk:container:custom:application"
option_name: "TOMCAT_START"
description: "Default application startup command"
default_value: ""
option_settings:
- namespace: "aws:ec2:vpc"
option_name: "VPCId"
value: "vpc-xxxxxxx"
- namespace: "aws:ec2:vpc"
option_name: "Subnets"
value: "subnet-xxxxxxx"
- namespace: "aws:elb:listener:80"
option_name: "InstancePort"
value: "8080"
- namespace: "aws:elasticbeanstalk:application"
option_name: "Application Healthcheck URL"
value: "TCP:8080"
tomcat_platform.json:
{
  "variables": {
    "platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
    "platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
    "platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-8fd760f6",
      "instance_type": "t2.micro",
      "ami_virtualization_type": "hvm",
      "ssh_username": "admin",
      "ami_name": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "ami_description": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "vpc_id": "vpc-xxxxxx",
      "subnet_id": "subnet-xxxxxx",
      "associate_public_ip_address": "true",
      "tags": {
        "eb_platform_name": "{{user `platform_name`}}",
        "eb_platform_version": "{{user `platform_version`}}",
        "eb_platform_arn": "{{user `platform_arn`}}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "scripts": [
        "builder/builder.sh"
      ]
    }
  ]
}
I'd appreciate any idea on how to make this work as expected. I found a couple of related issues reported against Packer, but they seem to be resolved on their side, and the documentation just says that the template must specify the target VPC and subnet.
The AWS documentation is a little misleading in this instance. You do need a default VPC in order to create a custom platform. From what I've seen, this is because the VPC flags that you are passing to the ebp create command aren't passed along to the Packer process that actually builds the platform.
To get around the error, you can create a new default VPC that you use only for custom platform creation.
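If your account allows it, that can be a single AWS CLI call (the region is just an example):
aws ec2 create-default-vpc --region eu-west-1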
Packer looks for a default VPC (its default behavior) while creating the resources required for building a custom platform, which includes launching an EC2 instance, creating a security group, etc. However, if a default VPC is not present in the region (for example, if it was deleted), the Packer build task will fail with the following error:
Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: xyx-yxyx-xyx'
To fix this error, set the following attributes in the "builders" section of the template.json file so that Packer uses a custom VPC and subnets while creating the resources:
▸ vpc_id
▸ subnet_id
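For reference, a minimal fragment of the builders section with those attributes set (the IDs are placeholders):
"builders": [
  {
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "vpc_id": "vpc-xxxxxxx",
    "subnet_id": "subnet-xxxxxxx",
    "associate_public_ip_address": true
  }
]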
I'm attempting to build an AWS AMI using both Packer and Ansible to provision my AMI. I'm getting stuck on being able to copy some local files to my newly spun up EC2 instance using Ansible. I'm using the copy module in Ansible to do this. Here's what my Ansible code looks like:
- name: Testing copy of the local remote file
  copy:
    src: /tmp/test.test
    dest: /tmp
Here's the error I get:
amazon-ebs: TASK [Testing copy of the local remote file] ***********************************
amazon-ebs: fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find '/tmp/test.test' in expected paths."}
I've verified that the file /tmp/test.test exists on my local machine from which Ansible is running.
For my hosts file I just have localhost in it, since Packer tells Ansible everything it needs to know about where to run its commands.
I'm not sure where to go from here or how to properly debug this error, so I'm hoping for a little help.
Here's what my Packer script looks like:
{
  "variables": {
    "aws_access_key": "{{env `access_key`}}",
    "aws_secret_key": "{{env `secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-116d857a",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "generic_jenkins_image",
    "ami_description": "Testing AMI building with Packer",
    "vpc_id": "xxxxxxxx",
    "subnet_id": "xxxxxxxx",
    "associate_public_ip_address": "true",
    "tags": {"Environment": "Dev", "Product": "SharedOperations"}
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "sudo rm -f /var/lib/dpkg/lock",
        "sudo apt-get update -y --fix-missing",
        "sudo apt-get -y install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev gcc build-essential python-pip",
        "sudo pip install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/main.yml"
    }
  ]
}
And here's my entire Ansible file:
---
- hosts: all
  sudo: yes
  tasks:
    - name: Testing copy of the local remote file
      copy:
        src: /tmp/test.test
        dest: /tmp
You are using the ansible-local provisioner, which runs the playbooks directly on the target ("local" in HashiCorp products like Vagrant and Packer describes the point of view of the provisioned machine).
The target does not have the /tmp/test.test file, hence you get the error.
You actually want to run the playbook using the regular Ansible provisioner.
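A sketch of what the provisioners section could look like with the remote Ansible provisioner; it runs ansible-playbook from the machine running Packer, so local files such as /tmp/test.test can be found (it also means Ansible must be installed on that machine rather than on the instance):
"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "ansible/main.yml"
  }
]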
I am trying to set up an AWS AMI vagrant provision: http://www.packer.io/docs/builders/amazon-ebs.html
I am using the standard .json config:
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "account_id": "0123-4567-0890",
  "s3_bucket": "packer-images",
  "x509_cert_path": "x509.cert",
  "x509_key_path": "x509.key",
  "x509_upload_path": "/tmp",
  "ami_name": "packer-quick-start {{timestamp}}"
}
It connects fine, and I see it create the instance in my AWS account. However, I keep getting Timeout waiting for SSH as an error. What could be causing this problem and how can I resolve it?
As I mentioned in my comment above, this is just because it sometimes takes more than a minute for an instance to launch and be SSH-ready.
If you want, you can set the timeout to be longer; the default timeout with Packer is 1 minute.
So you could set it to 5 minutes by adding the following to your JSON config:
"ssh_timeout": "5m"