Using Packer, I create an AMI in North Virginia (us-east-1). Below is the builder snippet for it.
"builders": [{
"type": "amazon-ebs",
"access_key": "XXXXXXXXXXXXXXXXXXXXXXX",
"secret_key": "XXXXXXXXXXXXXXXXXXXXXXX",
"region": "us-east-1",
"source_ami": "XXXXXXXXXXXXXXXXXXXXXXX",
"instance_type": "m4.2xlarge",
"ssh_username": "ubuntu",
"ami_users": [
"XXXXXXXXXXXX",
"YYYYYYYYYYYY"
],
"ami_name": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"launch_block_device_mappings": [{
"device_name": "/dev/sda1",
"volume_type": "gp2",
"delete_on_termination": true,
"volume_size": 30
}]
}]
I have no problems launching this AMI in us-east-1. But when I copy it to Mumbai (ap-south-1) and try to launch it, I get:
The instance configuration for this AWS Marketplace product is not supported. Please see the AWS Marketplace site for more information about supported instance types, regions, and operating systems.
Most of the settings are left at their defaults, so I'm not sure what is causing this issue. Any pointers would be of great help.
A Marketplace AMI cannot be copied between accounts due to license restrictions.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
You can't copy an AMI with an associated billingProduct code that was
shared with you from another account. This includes Windows AMIs and
AMIs from the AWS Marketplace. To copy a shared AMI with a
billingProduct code, launch an EC2 instance in your account using the
shared AMI and then create an AMI from the instance. For more
information, see Creating an Amazon EBS-Backed Linux AMI.
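In CLI terms, the workaround from the docs looks roughly like this (a sketch; all IDs below are placeholders, not values from the question):

# 1. Launch an instance from the shared AMI
aws ec2 run-instances --region us-east-1 --image-id ami-0abc1234 \
    --instance-type m4.2xlarge --key-name my-key

# 2. Create an AMI, owned by your account, from that instance
aws ec2 create-image --region us-east-1 --instance-id i-0def56789abcdef01 \
    --name "my-owned-copy"

# 3. The AMI you now own can be copied to another region
aws ec2 copy-image --region ap-south-1 --source-region us-east-1 \
    --source-image-id ami-0aaa2222 --name "my-owned-copy-mumbai"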
I am currently configuring my production instances to use the AWS Backup service rather than Lambda. However, I notice that AWS Backup does not have a "no reboot" option or anything mentioning that it will not reboot the EC2 instances.
Hence, will AWS Backup restart my EC2 instances during the backup (create AMI) process?
It will not reboot your instance. I checked by running an on-demand backup of my instance; in CloudTrail I then verified that the CreateImage API call made by the backup sets "noReboot": true.
From the CloudTrail event (part shown):
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "xxxx:AWSBackup-AWSBackupDefaultServiceRole",
    "arn": "arn:aws:sts::xxxx:assumed-role/AWSBackupDefaultServiceRole/AWSBackup-AWSBackupDefaultServiceRole"
  },
  "eventSource": "ec2.amazonaws.com",
  "eventName": "CreateImage",
  "requestParameters": {
    "description": "This image is created by the AWS Backup service.",
    "noReboot": true
  }
}
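If you want to double-check in your own account, one way (a sketch; adjust the region and result count as needed) is to look up recent CreateImage calls with the CLI:

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateImage \
    --max-results 5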
I am trying to mount my EFS into a multicontainer Docker Elastic Beanstalk environment using a task definition in Dockerrun.aws.json. I have also configured the EFS security group to accept NFS traffic from the EC2 (EB environment) security group.
However, I am facing this error:
ECS task stopped due to: Error response from daemon: create
ecs-awseb-SeyahatciBlog-env-k3k5grsrma-2-wordpress-88eff0a5fc88f9ae7500:
VolumeDriver.Create: mounting volume failed: mount: unknown filesystem
type 'efs'.
I am uploading this Dockerrun.aws.json file using AWS management console:
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "seyahatci-docker",
    "key": "index.docker.io/.dockercfg"
  },
  "volumes": [
    {
      "name": "wordpress",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-d9689882",
        "rootDirectory": "/blog-web-app/wordpress",
        "transitEncryption": "ENABLED"
      }
    },
    {
      "name": "mysql-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-d9689882",
        "rootDirectory": "/blog-db/mysql-data",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "blog-web-app",
      "image": "bireysel/seyehatci-blog-web-app",
      "memory": 256,
      "essential": false,
      "portMappings": [
        {"hostPort": 80, "containerPort": 80}
      ],
      "links": ["blog-db"],
      "mountPoints": [
        {
          "sourceVolume": "wordpress",
          "containerPath": "/var/www/html"
        }
      ]
    },
    {
      "name": "blog-db",
      "image": "mysql:5.7",
      "hostname": "blog-db",
      "memory": 256,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "mysql-data",
          "containerPath": "/var/lib/mysql"
        }
      ]
    }
  ]
}
AWS configuration screenshots (not reproduced here): the EC2 security group automatically created by EB, the EFS security group, and the EFS networking settings.
My scenario:
Set up some EC2 instances with Amazon Linux 2 AMIs.
Tried to set up EFS.
Hit the same error when trying to mount the EFS drive.
It seems the package was NOT included in the Amazon Linux 2 AMI, even though the documentation says it should be:
The amazon-efs-utils package comes preinstalled on Amazon Linux and Amazon Linux 2 Amazon Machine Images (AMIs).
https://docs.aws.amazon.com/efs/latest/ug/overview-amazon-efs-utils.html
Running which amzn-efs-utils reports that it is not installed:
$ which amzn-efs-utils
/usr/bin/which: no amzn-efs-utils in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin)
Fix
Install the amazon-efs-utils package:
sudo yum install amazon-efs-utils
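Once the package is installed you can verify that the efs mount type works; a minimal sketch, assuming a placeholder file system ID and mount point:

sudo mkdir -p /mnt/efs
sudo mount -t efs fs-XXXXXXXX:/ /mnt/efs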
After searching the entire web, I didn't find any solution for this problem, so I contacted AWS Support. They told me that the issue is the missing amazon-efs-utils package on EC2 instances created by Elastic Beanstalk, and I fixed the error by creating a file named efs.config inside a .ebextensions folder:
.ebextensions/efs.config
packages:
  yum:
    amazon-efs-utils: 1.2
Finally, I zipped the .ebextensions folder together with my Dockerrun.aws.json file before uploading, and the problem was resolved.
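For reference, a bundle like that can be created like so (a sketch; the zip file name is arbitrary):

zip -r deploy.zip .ebextensions Dockerrun.aws.json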
Currently we create EMR clusters using a config.json file that configures the cluster. This file specifies a single subnet ("Ec2SubnetId").
All of my EMR instances end up in this subnet; how do I let the cluster use multiple subnets?
Here is the Terraform template I am pushing to S3.
{
  "Applications": [
    {"Name": "Spark"},
    {"Name": "Hadoop"}
  ],
  "BootstrapActions": [
    {
      "Name": "Step1-stuff",
      "ScriptBootstrapAction": {
        "Path": "s3://${artifact_s3_bucket_name}/artifacts/${build_commit_id}/install-stuff.sh",
        "Args": ["${stuff_args}"]
      }
    },
    {
      "Name": "setup-cloudWatch-agent",
      "ScriptBootstrapAction": {
        "Path": "s3://${artifact_s3_bucket_name}/artifacts/${build_commit_id}/setup-cwagent-emr.sh",
        "Args": ["${build_commit_id}"]
      }
    }
  ],
  "Configurations": [
    {
      "Classification": "spark",
      "Properties": {
        "maximizeResourceAllocation": "true"
      }
    }
  ],
  "Instances": {
    "AdditionalMasterSecurityGroups": [ "${additional_master_security_group}" ],
    "AdditionalSlaveSecurityGroups": [ "${additional_slave_security_group}" ],
    "Ec2KeyName": "privatekey-${env}",
    "Ec2SubnetId": "${data_subnet}",
    "InstanceGroups": [
You cannot currently achieve what you are trying to do; EMR clusters always end up with all of their nodes in the same subnet.
Using instance fleets, you can indeed configure a set of subnets, but at launch time AWS will choose the best one and put all your instances there.
From the EMR documentation, under "Use the Console to Configure Instance Fleets":
For Network, enter a value. If you choose a VPC for Network, choose a single EC2 Subnet or CTRL + click to choose multiple EC2 subnets. The subnets you select must be the same type (public or private). If you choose only one, your cluster launches in that subnet. If you choose a group, the subnet with the best fit is selected from the group when the cluster launches.
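With instance fleets, the equivalent in a config file is the plural Ec2SubnetIds field; a minimal sketch with placeholder subnet IDs (it replaces the single Ec2SubnetId, and EMR still places all nodes in whichever one subnet it picks):

"Instances": {
  "Ec2SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"]
}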
We are using Packer to build images on a GCP compute instance. Packer tries to fetch the source image by project and image name, as follows:
https://www.googleapis.com/compute/v1/projects/<project-name>/global/images/<image-name>?alt=json
Then it throws an error:
oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.111.84:443: i/o timeout
As a security measure, our compute instance has no external IP address and therefore no internet access. In this case accounts.google.com is not reachable, so how can we authenticate to the Google APIs?
I tried enabling firewall rules and providing routes for internet access, but based on the requirement stated here, the instance still won't have access as long as it has no external IP address.
This means we need a different way to authenticate to the Google APIs.
But does Packer support this?
Here is the Packer builder we have:
"builders": [
{
"type": "googlecompute",
"project_id": "test",
"machine_type": "n1-standard-4",
"source_image_family": "{{user `source_family`}}",
"source_image": "{{user `source_image`}}",
"source_image_project_id": "{{user `source_project_id`}}",
"region": "{{user `region`}}",
"zone": "{{user `zone`}}",
"network": "{{user `network`}}",
"subnetwork": "{{user `subnetwork`}}",
"image_name": "test-{{timestamp}}",
"disk_size": 10,
"disk_type": "pd-ssd",
"state_timeout": "5m",
"ssh_username": "build",
"ssh_timeout": "1000s",
"ssh_private_key_file": "./gcp-instance-key.pem",
"service_account_email": "test-account#test-mine.iam.gserviceaccount.com",
"omit_external_ip": true,
"use_internal_ip": true,
"metadata": {
"user": "build"
}
}
To do what you want manually, you will need an SSH tunnel open on a working compute instance inside the project, or in a VPC that has peering enabled with the network where the instance you want to reach sits.
If you then use a CI system with a runner, like GitLab CI, be sure to create the runner inside the same VPC or in a VPC with peering.
If you don't want to create an instance with an external IP, you could try opening a VPN connection to the project and doing it through the VPN.
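As a rough sketch of the tunnel idea (the bastion name, zone, and local port are assumptions), you can forward a port through an instance that does have an external IP, and then point the isolated builder at the tunnel endpoint:

gcloud compute ssh bastion-vm --zone us-central1-a -- -L 8443:accounts.google.com:443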
I'm trying to get my Elastic File System (EFS) mounted in my Docker container so it can be used with AWS Batch. Here is what I did:
Created a new AMI optimized for the Elastic Container Service (ECS). I followed the guide here to make sure it had ECS on it. I also put the mount into the /etc/fstab file and verified that my EFS was mounted (/mnt/efs) after a reboot.
Tested an EC2 instance with my new AMI and verified I could pull the Docker container and pass it my mount point via
docker run --volume /mnt/efs:/home/efs -it mycontainer:latest
Running the Docker image interactively shows me my data inside EFS.
Set up a new compute environment with my new AMI that mounts EFS on boot.
Created a job definition file:
{
  "jobDefinitionName": "MyJobDEF",
  "jobDefinitionArn": "arn:aws:batch:us-west-2:#######:job-definition/Submit:8",
  "revision": 8,
  "status": "ACTIVE",
  "type": "container",
  "parameters": {},
  "retryStrategy": {
    "attempts": 1
  },
  "containerProperties": {
    "image": "########.ecr.us-west-2.amazonaws.com/mycontainer",
    "vcpus": 1,
    "memory": 100,
    "command": [
      "ls",
      "/home/efs"
    ],
    "volumes": [
      {
        "host": {
          "sourcePath": "/mnt/efs"
        },
        "name": "EFS"
      }
    ],
    "environment": [],
    "mountPoints": [
      {
        "containerPath": "/home/efs",
        "readOnly": false,
        "sourceVolume": "EFS"
      }
    ],
    "ulimits": []
  }
}
Ran the job and viewed the log.
Anyway, while it does not say "no file /home/efs found", it does not list anything in my EFS, which is populated; I'm interpreting that as the container mounting an empty EFS. What am I doing wrong? Is my AMI not mounting the EFS in the compute environment?
I covered this in a recent blog post:
https://medium.com/arupcitymodelling/lab-note-002-efs-as-a-persistence-layer-for-aws-batch-fcc3d3aabe90
You need to set up a launch template for your Batch instances, and you need to make sure that your subnets and security groups are configured properly.
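The core of that approach is a launch template whose user data installs efs-utils and mounts the file system before the ECS agent starts; a minimal sketch, with a placeholder file system ID:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

packages:
- amazon-efs-utils

runcmd:
- mkdir -p /mnt/efs
- mount -t efs fs-XXXXXXXX:/ /mnt/efs

--==MYBOUNDARY==--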