I'm building a custom platform to run our application. Our default VPC has been deleted, so according to the documentation I have to specify the VPC and subnet IDs almost everywhere. The command I run for ebp looks like the following:
ebp create -v --vpc.id vpc-xxxxxxx --vpc.subnets subnet-xxxxxx --vpc.publicip
The above spins up the Packer environment without any issue; however, when Packer starts to build an instance, I get the following error:
2017-12-07 18:07:05 UTC+0100 ERROR [Instance: i-00f376be9fc2fea34] Command failed on instance. Return code: 1 Output: 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX1.0.19-builder.log'. Hook /opt/elasticbeanstalk/hooks/packerbuild/build.rb failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2017-12-07 18:06:55 UTC+0100 ERROR 'packer build' failed, the build log has been saved to '/var/log/packer-builder/XXX:1.0.19-builder.log'
2017-12-07 18:06:55 UTC+0100 ERROR Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: 28d94e8c-e24d-440f-9c64-88826e042e9d'
Both the template and the platform.yaml specify vpc_id and subnet_id, but these are not taken into account by Packer.
platform.yaml:
version: "1.0"
provisioner:
  type: packer
  template: tomcat_platform.json
  flavor: ubuntu1604
metadata:
  maintainer: <Enter your contact details here>
  description: Ubuntu running Tomcat
  operating_system_name: Ubuntu Server
  operating_system_version: 16.04 LTS
  programming_language_name: Java
  programming_language_version: 8
  framework_name: Tomcat
  framework_version: 7
  app_server_name: "none"
  app_server_version: "none"
option_definitions:
  - namespace: "aws:elasticbeanstalk:container:custom:application"
    option_name: "TOMCAT_START"
    description: "Default application startup command"
    default_value: ""
option_settings:
  - namespace: "aws:ec2:vpc"
    option_name: "VPCId"
    value: "vpc-xxxxxxx"
  - namespace: "aws:ec2:vpc"
    option_name: "Subnets"
    value: "subnet-xxxxxxx"
  - namespace: "aws:elb:listener:80"
    option_name: "InstancePort"
    value: "8080"
  - namespace: "aws:elasticbeanstalk:application"
    option_name: "Application Healthcheck URL"
    value: "TCP:8080"
tomcat_platform.json:
{
"variables": {
"platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
"platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
"platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
},
"builders": [
{
"type": "amazon-ebs",
"region": "eu-west-1",
"source_ami": "ami-8fd760f6",
"instance_type": "t2.micro",
"ami_virtualization_type": "hvm",
"ssh_username": "admin",
"ami_name": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
"ami_description": "Tomcat running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
"vpc_id": "vpc-xxxxxx",
"subnet_id": "subnet-xxxxxx",
"associate_public_ip_address": "true",
"tags": {
"eb_platform_name": "{{user `platform_name`}}",
"eb_platform_version": "{{user `platform_version`}}",
"eb_platform_arn": "{{user `platform_arn`}}"
}
}
],
"provisioners": [
{
"type": "file",
"source": "builder",
"destination": "/tmp/"
},
{
"type": "shell",
"execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
"scripts": [
"builder/builder.sh"
]
}
]
}
I'd appreciate any ideas on how to make this work as expected. I found a couple of related issues with Packer, but they seem to have been resolved on their side, so the documentation just says that the template must specify the target VPC and subnet.
The AWS documentation is a little misleading in this instance. You do need a default VPC in order to create a custom platform. From what I've seen, this is because the VPC flags that you are passing into the ebp create command aren't passed along to the Packer process that actually builds the platform.
To get around the error, you can create a new default VPC that you use only for custom platform creation.
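If you'd rather not click through the console, the default VPC can also be recreated from the CLI; a minimal sketch (the region is an assumption and should match where the platform is built):

# Recreate a default VPC in the build region (eu-west-1 assumed here)
aws ec2 create-default-vpc --region eu-west-1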
Packer looks for a default VPC (Packer's default behavior) while creating the resources required for building a custom platform, which includes launching an EC2 instance, creating a security group, etc. However, if a default VPC is not present in the region (for example, if it has been deleted), the Packer build task fails with the following error:
Packer failed with error: '--> HVM AMI builder: VPCIdNotSpecified: No default VPC for this user status code: 400, request id: xyx-yxyx-xyx'
To fix this error, set the following attributes in the "builders" section of the template.json file so that Packer uses a custom VPC and subnet while creating the resources (see the sketch after this list):
▸ vpc_id
▸ subnet_id
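For example, the builders entry in tomcat_platform.json would carry those attributes; a sketch reusing the placeholder IDs and values from the question, with the remaining keys unchanged:

"builders": [
  {
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-8fd760f6",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "vpc_id": "vpc-xxxxxxx",
    "subnet_id": "subnet-xxxxxx",
    "associate_public_ip_address": "true"
  }
]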
Related
I'm trying to set up an Ubuntu server and log in with a non-default user. I've used cloud-config with the user data to set up an initial user, and Packer to provision the server:
system_info:
  default_user:
    name: my_user
    shell: /bin/bash
    home: /home/my_user
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
Packer logs in and provisions the server as my_user, but when I launch an instance from the AMI, AWS installs the authorized_keys files under /home/ubuntu/.ssh/
Packer config:
{
"variables": {
"aws_profile": ""
},
"builders": [{
"type": "amazon-ebs",
"profile": "{{user `aws_profile`}}",
"region": "eu-west-1",
"instance_type": "c5.large",
"source_ami_filter": {
"most_recent": true,
"owners": ["099720109477"],
"filters": {
"name": "*ubuntu-xenial-16.04-amd64-server-*",
"virtualization-type": "hvm",
"root-device-type": "ebs"
}
},
"ami_name": "my_ami_{{timestamp}}",
"ssh_username": "my_user",
"user_data_file": "cloud-config"
}],
"provisioners": [{
"type": "shell",
"pause_before": "10s",
"inline": [
"echo 'run some commands'"
]}
]
}
Once the server has launched, both ubuntu and my_user users exist in /etc/passwd:
my_user:x:1000:1002:Ubuntu:/home/my_user:/bin/bash
ubuntu:x:1001:1003:Ubuntu:/home/ubuntu:/bin/bash
At what point does the ubuntu user get created, and is there a way to install the authorized_keys file under /home/my_user/.ssh at launch instead of ubuntu?
To persist the default user when using the AMI to launch new EC2 instances from it, you have to change the value in /etc/cloud/cloud.cfg and update this part:
system_info:
  default_user:
    # Update this!
    name: ubuntu
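If you bake the AMI with Packer, one way to apply that change is a shell provisioner step that rewrites the value; a minimal sketch, assuming the stock cloud.cfg layout and the my_user name from the question:

# Point cloud-init's default_user at my_user so new launches install the keys there
sudo sed -i 's/^\( *name:\) ubuntu$/\1 my_user/' /etc/cloud/cloud.cfg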
You can add your public keys when you create the user using cloud-init. Here is how you do it.
users:
  - name: <username>
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
Adding an additional SSH user account with cloud-init
I am using Elastic Beanstalk's aws:elasticbeanstalk:application:environment namespace to configure my environment with env vars. How can I set different values for different environments (e.g. development versus production)?
Development:
option_settings:
  aws:elasticbeanstalk:application:environment:
    REDIS_HOST: localhost
Production:
option_settings:
  aws:elasticbeanstalk:application:environment:
    REDIS_HOST: prod.redis.server.com
The AWS CLI has a convenient way of doing this for you, as the update-environment command allows you to set env vars from a specially formatted JSON file. Create a separate JSON file for each environment you will be deploying to.
Example json file named deploy-dev.json:
[
{
"Namespace": "aws:elasticbeanstalk:application:environment",
"OptionName": "NODE_ENV",
"Value": "dev"
},
{
"Namespace": "aws:elasticbeanstalk:application:environment",
"OptionName": "LOG_LEVEL",
"Value": "silly"
}
]
Deploy app and then update env vars:
aws elasticbeanstalk create-application-version --application-name "$EB_APP_NAME" --version-label "$EB_VERSION"
aws elasticbeanstalk update-environment --environment-name "$EB_ENV_NAME" --version-label "$EB_VERSION" --option-settings file://.ebextensions/deploy-dev.json
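For production you would keep a parallel file, say deploy-prod.json (the file name is an assumption), holding the production values from the question, and point update-environment at it instead:

[
  {
    "Namespace": "aws:elasticbeanstalk:application:environment",
    "OptionName": "REDIS_HOST",
    "Value": "prod.redis.server.com"
  }
]

aws elasticbeanstalk update-environment --environment-name "$EB_ENV_NAME" --version-label "$EB_VERSION" --option-settings file://.ebextensions/deploy-prod.json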
How you change it depends on your deployment method.
One option you can use is to set a bogus value in your .config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    REDIS_HOST: change me
Then, after deployment, modify the variable either using the AWS Management Console or using the EB CLI:
eb setenv REDIS_HOST=prod.redis.server.com
If you are using CloudFormation to deploy your EB application, you can feed the value as part of the OptionSettings field in your CloudFormation template:
"EBConfigurationTemplate" : {
"Type" : "AWS::ElasticBeanstalk::ConfigurationTemplate",
"Properties" : {
"ApplicationName" : {
"Ref" : "EBApplication"
},
"Description" : "Configuration Template",
"OptionSettings" : [
{
"Namespace" : "aws:elasticbeanstalk:application:environment",
"OptionName" : "REDIS_HOST",
"Value" : {
"Ref" : "RedisHostInputParameter"
}
},
]
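RedisHostInputParameter in the Ref above is just a stack parameter you supply at create/update time; a minimal sketch of how it might be declared elsewhere in the template (the default and description are assumptions):

"Parameters" : {
  "RedisHostInputParameter" : {
    "Type" : "String",
    "Default" : "localhost",
    "Description" : "Redis host for this environment"
  }
}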
There are probably other methods too, but they will depend on the method of deployment.
Today when I launch an app using Kubernetes on AWS, it exposes a publicly visible LoadBalancer Ingress URL; however, to link that to my domain and make the app accessible to the public, I need to manually go into the AWS Route53 console in a browser on every launch. Can I update the AWS Route53 type A resource record to match the latest Kubernetes LoadBalancer Ingress URL from the command line?
Kubernetes on gcloud shares this challenge of having to either predefine a static IP, which is used in the launch config, or manually do browser-based domain linkage after launch. On AWS I was hoping I could use something similar to this from the command line:
aws route53domains update-domain-nameservers ???
OR can I predefine an AWS Kubernetes LoadBalancer Ingress, similar to using a predefined static IP on gcloud?
To show the deployed app's LoadBalancer Ingress URL, issue
kubectl describe svc
... output
Name: aaa-deployment-407
Namespace: ruptureofthemundaneplane
Labels: app=bbb
pod-template-hash=4076262206
Selector: app=bbb,pod-template-hash=4076262206
Type: LoadBalancer
IP: 10.0.51.82
LoadBalancer Ingress: a244bodhisattva79c17cf7-61619.us-east-1.elb.amazonaws.com
Port: port-1 80/TCP
NodePort: port-1 32547/TCP
Endpoints: 10.201.0.3:80
Port: port-2 443/TCP
NodePort: port-2 31248/TCP
Endpoints: 10.201.0.3:443
Session Affinity: None
No events.
UPDATE:
Getting an error trying a new command line technique (hat tip to #error2007s comment) ... issue this
aws route53 list-hosted-zones
... outputs
{
"HostedZones": [
{
"ResourceRecordSetCount": 6,
"CallerReference": "2D58A764-1FAC-DEB4-8AC7-AD37E74B94E6",
"Config": {
"PrivateZone": false
},
"Id": "/hostedzone/Z3II3949ZDMDXV",
"Name": "chainsawhaircut.com."
}
]
}
Important bit used below: hosted zone Z3II3949ZDMDXV
Now I craft the following, using this Doc (and this Doc as well), as the file /change-resource-record-sets.json (NOTE: I can successfully change a Type A record using a similar CLI call; however, I need to change the Type A record to use an Alias Target of the LoadBalancer Ingress URL):
{
"Comment": "Update record to reflect new IP address of fresh deploy",
"Changes": [{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "chainsawhaircut.com.",
"Type": "A",
"TTL": 60,
"AliasTarget": {
"HostedZoneId": "Z3II3949ZDMDXV",
"DNSName": "a244bodhisattva79c17cf7-61619.us-east-1.elb.amazonaws.com",
"EvaluateTargetHealth": false
}
}
}]
}
On the command line I then issue
aws route53 change-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --change-batch file:///change-resource-record-sets.json
which gives this error message
An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request
Any insights?
Here is the logic needed to update an AWS Route53 type A resource record with the value from a freshly minted Kubernetes LoadBalancer Ingress URL.
step 1 - identify your hostedzone Id by issuing
aws route53 list-hosted-zones
... from the output, here is the clip for my domain
"Id": "/hostedzone/Z3II3949ZDMDXV",
... importantly, never populate the json with hosted zone Z3II3949ZDMDXV; it is only used as a CLI parameter. There is a second, similarly named token, HostedZoneId, which is entirely different.
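If you want to script this lookup, the Id can be pulled out directly with a JMESPath query; a sketch using the domain from this answer:

aws route53 list-hosted-zones --query "HostedZones[?Name=='scottstensland.com.'].Id" --output text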
step 2 - see the current value of your route53 domain record ... issue:
aws route53 list-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --query "ResourceRecordSets[?Name == 'scottstensland.com.']"
... output
[
{
"AliasTarget": {
"HostedZoneId": "Z35SXDOTRQ7X7K",
"EvaluateTargetHealth": false,
"DNSName": "dualstack.asomepriorvalue39e7db-1867261689.us-east-1.elb.amazonaws.com."
},
"Type": "A",
"Name": "scottstensland.com."
},
{
"ResourceRecords": [
{
"Value": "ns-1238.awsdns-26.org."
},
{
"Value": "ns-201.awsdns-25.com."
},
{
"Value": "ns-969.awsdns-57.net."
},
{
"Value": "ns-1823.awsdns-35.co.uk."
}
],
"Type": "NS",
"Name": "scottstensland.com.",
"TTL": 172800
},
{
"ResourceRecords": [
{
"Value": "ns-1238.awsdns-26.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"
}
],
"Type": "SOA",
"Name": "scottstensland.com.",
"TTL": 900
}
]
... in the above, notice the value of
"HostedZoneId": "Z35SXDOTRQ7X7K",
which is the second, similarly named token. Do NOT use the wrong Hosted Zone ID.
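That second token is the ELB's own canonical hosted zone Id, and it can also be read straight from the CLI instead of being copied from an existing record; a sketch (assumes the Kubernetes service created a classic ELB, which is what the output above shows):

aws elb describe-load-balancers --query "LoadBalancerDescriptions[].{DNSName:DNSName,AliasHostedZoneId:CanonicalHostedZoneNameID}"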
step 3 - put the below into your change file aws_route53_type_A.json (for the syntax Doc, see the link mentioned in the comment above)
{
"Comment": "Update record to reflect new DNSName of fresh deploy",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"AliasTarget": {
"HostedZoneId": "Z35SXDOTRQ7X7K",
"EvaluateTargetHealth": false,
"DNSName": "dualstack.a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com."
},
"Type": "A",
"Name": "scottstensland.com."
}
}
]
}
To identify the value for the above field "DNSName": after the Kubernetes app deploy on AWS, it responds with a LoadBalancer Ingress, as shown in the output of this CLI command:
kubectl describe svc --namespace=ruptureofthemundaneplane
... as in
LoadBalancer Ingress: a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com
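If you only want that hostname for scripting (for example, to template it into the change file), kubectl can print it directly; a sketch using the service name and namespace from this answer:

kubectl get svc aaa-deployment-407 --namespace=ruptureofthemundaneplane -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'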
... even though my goal is to execute a command line call, I can do this manually by getting into the AWS console in a browser and pulling up my domain on Route53 ...
... in the browser's editable alias target picklist text box (circled in green) I noticed the URL gets magically prepended with dualstack. Previously I was missing that magic string ... so the json key "DNSName" wants this
dualstack.a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com.
finally execute the change request
aws route53 change-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --change-batch file://./aws_route53_type_A.json
... output
{
"ChangeInfo": {
"Status": "PENDING",
"Comment": "Update record to reflect new DNSName of fresh deploy",
"SubmittedAt": "2016-07-13T14:53:02.789Z",
"Id": "/change/CFUX5R9XKGE1C"
}
}
... now to confirm the change is live, run this to show the record
aws route53 list-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV
You can also use the external-dns project.
The AWS-specific setup can be found here.
After installation it can be used with an annotation, e.g.: external-dns.alpha.kubernetes.io/hostname: nginx.external-dns-test.my-org.com.
Note that the IAM permissions need to be set properly.
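Once external-dns is running, the annotation goes on the Service fronting your app; a minimal sketch (the name, selector and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # external-dns watches this annotation and manages the matching Route53 record
    external-dns.alpha.kubernetes.io/hostname: nginx.external-dns-test.my-org.com.
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80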
I am trying to launch an AWS Elastic Beanstalk environment running multi-container Docker, but it fails with the following list of events:
2016-01-18 16:58:57 UTC+0100 WARN Removed instance [i-a7162d2c] from your environment due to a EC2 health check failure.
2016-01-18 16:57:57 UTC+0100 WARN Environment health has transitioned from Degraded to Severe. None of the instances are sending data.
2016-01-18 16:47:58 UTC+0100 WARN Environment health has transitioned from Pending to Degraded. Command is executing on all instances. Command failed on all instances.
2016-01-18 16:43:58 UTC+0100 INFO Added instance [i-a7162d2c] to your environment.
2016-01-18 16:43:27 UTC+0100 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2016-01-18 16:41:58 UTC+0100 INFO Environment health has transitioned to Pending. There are no instances.
2016-01-18 16:41:54 UTC+0100 INFO Created security group named: awseb-e-ih2exekpvz-stack-AWSEBSecurityGroup-M2O11DNNCJXW
2016-01-18 16:41:54 UTC+0100 INFO Created EIP: 52.48.132.172
2016-01-18 16:41:09 UTC+0100 INFO Using elasticbeanstalk-eu-west-1-936425941972 as Amazon S3 storage bucket for environment data.
2016-01-18 16:41:08 UTC+0100 INFO createEnvironment is starting.
Health status is marked "Severe" and I have the following logs:
98 % of CPU is in use.
Initialization failed at 2016-01-18T15:54:33Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/preinit/02ecs.sh failed.
. /opt/elasticbeanstalk/hooks/common.sh
/opt/elasticbeanstalk/bin/get-config container -k ecs_cluster
EB_CONFIG_ECS_CLUSTER=awseb-figure-test-ih2exekpvz
/opt/elasticbeanstalk/bin/get-config container -k ecs_region
EB_CONFIG_ECS_REGION=eu-west-1
/opt/elasticbeanstalk/bin/get-config container -k support_files_dir
EB_CONFIG_SUPPORT_FILES_DIR=/opt/elasticbeanstalk/containerfiles/support
is_baked ecs_agent
[[ -f /etc/elasticbeanstalk/baking_manifest/ecs_agent ]]
true
aws configure set default.output json
aws configure set default.region eu-west-1
echo ECS_CLUSTER=awseb-figure-test-ih2exekpvz
grep -q 'ecs start/'
initctl status ecs
initctl start ecs
ecs start/running, process 8418
TIMEOUT=120
jq -r .ContainerInstanceArn
curl http://localhost:51678/v1/metadata
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 51678: Connection refused
My configuration:
Environment type: Single Instance
Instance type: Medium
Root volume type: SSD
Root Volume Size: 8GB
Zone: EU-West
My Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "2",
"containerDefinitions": [
{
"essential": true,
"memory": 252,
"links": [
"redis"
],
"mountPoints": [
{
"containerPath": "/srv/express",
"sourceVolume": "_WebExpress"
},
{
"containerPath": "/srv/express/node_modules",
"sourceVolume": "SrvExpressNode_Modules"
}
],
"name": "web",
"image": "figure/web:latest",
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000
}
]
},
{
"essential": true,
"memory": 252,
"image": "redis",
"name": "redis",
"portMappings": [
{
"containerPort": 6379,
"hostPort": 6379
}
]
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "./web/express"
},
"name": "_WebExpress"
},
{
"host": {
"sourcePath": "/srv/express/node_modules"
},
"name": "SrvExpressNode_Modules"
}
]
}
@Kilianc: If you are creating an Elastic Beanstalk application using the multi-container Docker platform, then you have to add the following list of policies to the default role "aws-elasticbeanstalk-ec2-role", or you can create your own role (the CLI commands after this list show one way to attach them):
AWSElasticBeanstalkWebTier
AWSElasticBeanstalkMulticontainerDocker
AWSElasticBeanstalkWorkerTier
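For reference, the AWS managed policies can be attached to the default instance profile role from the CLI; a sketch, assuming the default role name from above:

aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier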
Thanks
It appears I forgot to set up policies and permissions as described in the AWS Elastic Beanstalk documentation:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecstutorial.html
I am trying to set up an AWS AMI build with Packer: http://www.packer.io/docs/builders/amazon-ebs.html
I am using the standard .json config:
{
"type": "amazon-instance",
"access_key": "YOUR KEY HERE",
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"source_ami": "ami-d9d6a6b0",
"instance_type": "m1.small",
"ssh_username": "ubuntu",
"account_id": "0123-4567-0890",
"s3_bucket": "packer-images",
"x509_cert_path": "x509.cert",
"x509_key_path": "x509.key",
"x509_upload_path": "/tmp",
"ami_name": "packer-quick-start {{timestamp}}"
}
It connects fine, and I see it create the instance in my AWS account. However, I keep getting Timeout waiting for SSH as an error. What could be causing this problem and how can I resolve it?
As I mentioned in my comment above, this is just because sometimes it takes more than a minute for an instance to launch and be SSH-ready.
If you want, you could set the timeout to be longer - the default SSH timeout with Packer is 1 minute.
So you could set it to 5 minutes by adding the following to your json config:
"ssh_timeout": "5m"