I'm running the following command:
aws ec2 create-volume --region eu-west-1 --availability-zone eu-west-1a --snapshot-id snap-02583b4b1fb1d2d84
And I get the response:
{
"SnapshotId": "snap-02583b4b1fb1d2d84",
"Size": 40,
"VolumeType": "standard",
"Encrypted": true,
"State": "creating",
"VolumeId": "vol-0d7bec77ac1164266",
"CreateTime": "2017-05-09T10:17:03.521Z",
"AvailabilityZone": "eu-west-1a"
}
However, any subsequent command, such as:
aws ec2 wait volume-available --volume-ids vol-0d7bec77ac1164266
Returns:
Waiter VolumeAvailable failed: The volume 'vol-0d7bec77ac1164266' does not exist.
When I look at the volumes dashboard in the web console, I cannot see the volume. I have checked every region.
Has anyone seen this behaviour before?
UPDATE
The command appears to work as expected if I execute it on another computer using a different IAM user with * permissions on * resources.
UPDATE 2
It appears that the volume is encrypted, and my IAM permissions may not be sufficient to handle an encrypted volume.
I had this same issue when attempting to create a volume from an encrypted snapshot. The create-volume command appears to complete successfully:
{
"AvailabilityZone": "eu-west-1a",
"Encrypted": true,
"VolumeType": "gp2",
"VolumeId": "vol-xxxxx",
"State": "creating",
"Iops": 100,
"SnapshotId": "snap-xxxxx",
"CreateTime": "2017-07-14T21:03:24.430Z",
"Size": 10
}
However, the volume is never actually created and cannot be found when attempting to describe it.
The issue was that the KMS key the snapshot was encrypted with was pending deletion. After cancelling the key deletion and re-enabling the key, the volume was created successfully.
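If you hit the same symptom, a quick check is to inspect the key's state and, if it is pending deletion, cancel the deletion and re-enable it. A minimal sketch with the AWS CLI (the key ID is a placeholder):
# Check the state of the KMS key the snapshot is encrypted with (key ID is a placeholder)
aws kms describe-key --key-id <kms-key-id> --query 'KeyMetadata.KeyState' --output text
# If it reports "PendingDeletion", cancel the deletion and re-enable the key
aws kms cancel-key-deletion --key-id <kms-key-id>
aws kms enable-key --key-id <kms-key-id>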
OK, I have got it working, but I still believe there is an issue with the CLI.
The volume being created is encrypted, so permissions are needed to handle encrypted volumes.
The only way I could get it to work was to add the following permissions:
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant",
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey",
"ec2:AttachVolume",
"ec2:DescribeVolume*",
"ec2:Describe*",
"ec2:CreateVolume",
"ec2:DescribeSnapshots",
"ec2:AttachVolume"
The CLI doesn't report any permission errors when running the command against an encrypted snapshot; the volume simply never appears.
I've raised the issue: https://github.com/aws/aws-cli/issues/2592
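For reference, once the KMS permissions above are in place, the original workflow should complete. A minimal sketch reusing the IDs from the question:
# Create the volume from the encrypted snapshot and capture its ID
VOLUME_ID=$(aws ec2 create-volume \
  --region eu-west-1 \
  --availability-zone eu-west-1a \
  --snapshot-id snap-02583b4b1fb1d2d84 \
  --query 'VolumeId' --output text)
# Wait until the volume actually becomes available
aws ec2 wait volume-available --region eu-west-1 --volume-ids "$VOLUME_ID"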
I was unable to reproduce your problem.
I did the following steps:
Created an IAM user with these permissions as an inline policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Action": [
"ec2:CreateVolume"
],
"Resource": [
"*"
]
}
]
}
Created a volume from a snapshot (as that User):
aws ec2 create-volume --region ap-southeast-2 --availability-zone ap-southeast-2a --snapshot-id snap-abcd1234 --profile xyz
{
"AvailabilityZone": "ap-southeast-2a",
"Encrypted": false,
"VolumeType": "standard",
"VolumeId": "vol-03e2eee8e64f593d3",
"State": "creating",
"SnapshotId": "snap-abcd1234",
"CreateTime": "2017-05-11T05:17:23.960Z",
"Size": 30
}
The volume was created successfully.
That user doesn't have permission to call DescribeVolumes (which the volume-available waiter uses), so I used a different user:
aws ec2 wait volume-available --volume-ids vol-03e2eee8e64f593d3 --region ap-southeast-2 --profile normal
The command completed immediately without error.
Related
I have private snapshots in one account (source) that I have shared with another account (target). I am able to see the snapshots themselves from the target account, but the tags are not visible, neither in the console nor via the CLI. This makes it impossible to filter for a desired snapshot from the target account. For background, the user in the target account has the following policy in effect:
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
Here's an example of what I'm seeing. From the source account:
$ aws --region us-east-2 ec2 describe-snapshots --snapshot-ids snap-XXXXX
{
"Snapshots": [
{
"Description": "snapshot for testing",
"VolumeSize": 50,
"Tags": [
{
"Value": "test-snapshot",
"Key": "Name"
}
],
"Encrypted": true,
"VolumeId": "vol-XXXXX",
"State": "completed",
"KmsKeyId": "arn:aws:kms:us-east-2:XXXXX:key/mrk-XXXXX",
"StartTime": "2022-04-19T18:29:36.069Z",
"Progress": "100%",
"OwnerId": "XXXXX",
"SnapshotId": "snap-XXXXX"
}
]
}
But from the target account:
$ aws --region us-east-2 ec2 describe-snapshots --owner-ids 012345678900 --snapshot-ids snap-11111111111111111
{
"Snapshots": [
{
"Description": "snapshot for testing",
"VolumeSize": 50,
"Encrypted": true,
"VolumeId": "vol-22222222222222222",
"State": "completed",
"KmsKeyId": "arn:aws:kms:us-east-2:012345678900:key/mrk-00000000000000000000000000000000",
"StartTime": "2022-04-19T18:29:36.069Z",
"Progress": "100%",
"OwnerId": "012345678900",
"SnapshotId": "snap-11111111111111111"
}
]
}
Any ideas on what's going on here?
Cheers!
From https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions
When you tag public or shared resources, the tags you assign are
available only to your AWS account; no other AWS account will have
access to those tags. For tag-based access control to shared
resources, each AWS account must assign its own set of tags to control
access to the resource.
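In other words, tags do not travel with a shared snapshot. If you need to filter from the target account, a sketch of one workaround (assuming the target account is allowed to tag the shared snapshot, per the doc above) is to assign your own tags there and filter on those:
# From the target account, assign your own Name tag to the shared snapshot
# (IDs reused from the redacted example above)
aws --region us-east-2 ec2 create-tags \
  --resources snap-11111111111111111 \
  --tags Key=Name,Value=test-snapshot
# The tag is now visible to the target account and can be used for filtering
aws --region us-east-2 ec2 describe-snapshots \
  --owner-ids 012345678900 \
  --filters Name=tag:Name,Values=test-snapshot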
I am running a Docker image on an ECS cluster so I can shell into it and run some simple tests. However, when I run this:
aws ecs execute-command \
--cluster MyEcsCluster \
--task $ECS_TASK_ARN \
--container MainContainer \
--command "/bin/bash" \
--interactive
I get the error:
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.
I can confirm the task + container + agent are all running:
aws ecs describe-tasks \
--cluster MyEcsCluster \
--tasks $ECS_TASK_ARN \
| jq '.'
"containers": [
{
"containerArn": "<redacted>",
"taskArn": "<redacted>",
"name": "MainContainer",
"image": "confluentinc/cp-kafkacat",
"runtimeId": "<redacted>",
"lastStatus": "RUNNING",
"networkBindings": [],
"networkInterfaces": [
{
"attachmentId": "<redacted>",
"privateIpv4Address": "<redacted>"
}
],
"healthStatus": "UNKNOWN",
"managedAgents": [
{
"lastStartedAt": "2021-09-20T16:26:44.540000-05:00",
"name": "ExecuteCommandAgent",
"lastStatus": "RUNNING"
}
],
"cpu": "0",
"memory": "4096"
}
],
I'm defining the ECS cluster and task definition with the following CDK TypeScript code:
new Cluster(stack, `MyEcsCluster`, {
vpc,
clusterName: `MyEcsCluster`,
})
const taskDefinition = new FargateTaskDefinition(stack, `TestTaskDefinition`, {
family: `TestTaskDefinition`,
cpu: 512,
memoryLimitMiB: 4096,
})
taskDefinition.addContainer("MainContainer", {
image: ContainerImage.fromRegistry("confluentinc/cp-kafkacat"),
command: ["tail", "-F", "/dev/null"],
memoryLimitMiB: 4096,
// Some internet searches suggested setting this flag. This didn't seem to help.
readonlyRootFilesystem: false,
})
ECS Exec Checker should be able to figure out what's wrong with your setup. Can you give it a try?
The check-ecs-exec.sh script allows you to check and validate that both your CLI environment and your ECS cluster/task are ready for ECS Exec, by calling various AWS APIs on your behalf.
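Usage is a single script invocation; a sketch assuming the script from the amazon-ecs-exec-checker repository and the cluster/task names from the question:
# Fetch the checker and run it against the cluster and task
git clone https://github.com/aws-containers/amazon-ecs-exec-checker.git
cd amazon-ecs-exec-checker
./check-ecs-exec.sh MyEcsCluster "$ECS_TASK_ARN"   # cluster name, then task ID or ARN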
Building on @clay's comment:
I was also missing ssmmessages:* permissions.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-required-iam-permissions says a policy such as
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*"
}
]
}
should be attached to the task role (not the task execution role), although the sole ssmmessages:CreateDataChannel permission does cut it.
The managed policies
arn:aws:iam::aws:policy/AmazonSSMFullAccess
arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy
arn:aws:iam::aws:policy/AWSCloud9SSMInstanceProfile
all contain the necessary permissions, with AWSCloud9SSMInstanceProfile being the most minimal.
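If you manage the role with the CLI, a minimal sketch of attaching the inline policy above to the task role (role and policy names are placeholders):
# Attach the ssmmessages permissions as an inline policy on the *task* role
aws iam put-role-policy \
  --role-name MyTaskRole \
  --policy-name ecs-exec-ssmmessages \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }]
  }'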
I have an S3 bucket which contains my OVA file. The file name does not contain spaces or other special characters.
The S3 bucket is in my default region. I have created the role and trust policy as described in https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-image-prereqs
I call the following command to start the import:
aws ec2 import-image --description "IBM QRadar CE 733" --license-type BYOL --disk-containers file://containers.json
{
"Description": "IBM QRadar CE 733",
"ImportTaskId": "import-ami-xxxxxxxxxxxx",
"LicenseType": "BYOL",
"Progress": "1",
"SnapshotDetails": [
{
"Description": "QRadarCE733",
"DiskImageSize": 0.0,
"Format": "OVA",
"UserBucket": {
"S3Bucket": "ibmqradarce733",
"S3Key": "QRadarCE733GA_v1_0.ova"
}
}
],
"Status": "active",
"StatusMessage": "pending"
}
containers.json contains:
[{
"Description": "QRadarCE733",
"Format": "OVA",
"UserBucket": {
"S3Bucket": "ibmqradarce733",
"S3Key": "QRadarCE733GA_v1_0.ova"
}
}]
Progress check
(Please note: I have replaced part of the ImportTaskId with xxx.)
After only a few seconds, while the task is still in the "Validation" phase, I receive the error:
ClientError: Disk validation failed [We do not have access to the given resource. Reason 403 Forbidden]
Here is the full response:
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0a09ee6b0e35d8ca0
{
"ImportImageTasks": [
{
"Description": "IBM QRadar CE 733",
"ImportTaskId": "import-ami-xxxxxxxxxxxxx",
"LicenseType": "BYOL",
"SnapshotDetails": [],
"Status": "deleting",
"StatusMessage": "ClientError: Disk validation failed [We do not have access to the given resource. Reason 403 Forbidden]",
"Tags": []
}
]
}
Make sure the vmimport policy attached to the vmimport role allows access to the S3 bucket containing your .ova files.
If you copied the policy from the documentation verbatim, you will need to edit it to explicitly grant access to your own S3 bucket.
This section:
"Resource": [
"arn:aws:s3:::disk-image-file-bucket",
"arn:aws:s3:::disk-image-file-bucket/*"
]
Should become:
"Resource": [
"arn:aws:s3:::ibmqradarce733",
"arn:aws:s3:::ibmqradarce733/*"
]
Give full public access to the bucket and the file (object).
I'm pretty new to AWS and I'm just experimenting and trying to learn. I have an EC2 instance with an IAM role attached. I also have an EFS file system with the policy below in place. My intent was to restrict mounting of the access point to EC2 instances with that IAM role attached.
But when I try to mount from the EC2 instance I get access denied.
mount.nfs4: access denied by server while mounting 127.0.0.1:
If I change the principal to "AWS": "*" I can mount the access point. According to the docs I can specify the IAM role used by the EC2 instance as the principal, but it doesn't seem to work.
I suspect my problem is somehow related to the role attached to the EC2 instance. The role has EFS client actions, but when I look at the role in the IAM console and check Access Advisor, it says the role has never been accessed. So I may be doing something fundamentally wrong.
{
"Version": "2020-08-08",
"Id": "access-point-www",
"Statement": [
{
"Sid": "access-point-webstorage",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345678:role/wwwservers"
},
"Action": [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite"
],
"Resource": "arn:aws:elasticfilesystem:us-east-1:12345678:file-system/fs-987654da",
"Condition": {
"StringEquals": {
"elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:12345678:access-point/fsap-01ffffbfb38217bcd"
}
}
}
]
}
Did you enable IAM mounting? Otherwise AWS tries to mount the EFS volume as an anonymous principal.
For EC2, as in your case, you might just pass -o iam as an option to your mount call.
See: https://docs.amazonaws.cn/en_us/efs/latest/ug/efs-mount-helper.html#mounting-IAM-option
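A sketch of what that looks like with the EFS mount helper (assuming amazon-efs-utils is installed; the mount point is a placeholder, and the file system and access point IDs are reused from the question):
# Mount through the EFS mount helper with TLS and IAM auth, so the instance's
# attached role is used instead of an anonymous principal
sudo mount -t efs -o tls,iam,accesspoint=fsap-01ffffbfb38217bcd \
  fs-987654da:/ /mnt/www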
For ECS task definitions, this can be done via aws_ecs_task_definition.volume.efs_volume_configuration.authorization_config in Terraform, like this:
resource "aws_ecs_task_definition" "service" {
family = "something"
container_definitions = file("something.json")
volume {
name = "service-storage"
efs_volume_configuration {
file_system_id = aws_efs_file_system.efs[0].id
root_directory = "/"
transit_encryption = "ENABLED"
authorization_config {
iam = "ENABLED"
}
}
}
}
iam - (Optional) Whether or not to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration. Valid values: ENABLED, DISABLED. If this parameter is omitted, the default value of DISABLED is used.
This will help if your CloudTrail shows errors where an anonymous principal tries to mount your EFS. Such errors look something like this:
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AWSAccount",
"principalId": "",
"accountId": "ANONYMOUS_PRINCIPAL"
},
"eventSource": "elasticfilesystem.amazonaws.com",
"eventName": "NewClientConnection",
"sourceIPAddress": "AWS Internal",
"userAgent": "elasticfilesystem",
"errorCode": "AccessDenied",
"readOnly": true,
"resources": [
{
"accountId": "XXXXXX",
"type": "AWS::EFS::FileSystem",
"ARN": "arn:aws:elasticfilesystem:eu-west-1:XXXXXX:file-system/YYYYYY"
}
],
"eventType": "AwsServiceEvent",
"managementEvent": true,
"eventCategory": "Management",
"recipientAccountId": "XXXXXX",
"sharedEventID": "ZZZZZZZZ",
"serviceEventDetails": {
"permissions": {
"ClientRootAccess": false,
"ClientMount": false,
"ClientWrite": false
},
"sourceIpAddress": "nnnnnnn"
}
}
Note: "principalId": "", and "accountId": "ANONYMOUS_PRINCIPAL"
I'm a beginner with AWS.
I just want to stop and start several EC2 instances automatically and periodically (not reboot them).
Is there a recommended way to do this?
Amazon recently (Feb 2018) released the EC2 instance scheduler tool:
The AWS Instance Scheduler is a simple AWS-provided solution that
enables customers to easily configure custom start and stop schedules
for their Amazon Elastic Compute Cloud (Amazon EC2) and Amazon
Relational Database Service (Amazon RDS) instances. The solution is
easy to deploy and can help reduce operational costs for both
development and production environments. Customers who use this
solution to run instances during regular business hours can save up to
70% compared to running those instances 24 hours a day.
I had this up and running in my account in 15 minutes; very simple to use, and practically free.
https://aws.amazon.com/answers/infrastructure-management/instance-scheduler/
AWS has a good doc explaining how you can achieve this using Lambda and CloudWatch Events. You can refer to it here: https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/
This solution can be modified to fetch the EC2 instance list dynamically, or to operate on a set of instances identified by a certain tag.
Yes, you can do that using AWS Lambda. You can configure a CloudWatch Events trigger that runs on a cron expression (in UTC).
Here is a related link: https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/
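A sketch of wiring that up with the CLI (function name, account ID, and region are placeholders; the schedule matches the 14:30 UTC example used further down):
# Create a cron rule that fires at 14:30 UTC every day
aws events put-rule --name stop-ec2-nightly \
  --schedule-expression "cron(30 14 * * ? *)"
# Allow CloudWatch Events to invoke the Lambda function
aws lambda add-permission \
  --function-name StopEC2Instances \
  --statement-id stop-ec2-nightly \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:111122223333:rule/stop-ec2-nightly
# Point the rule at the function
aws events put-targets --rule stop-ec2-nightly \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:111122223333:function:StopEC2Instances"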
Another alternative is to use the awscli, which is available from pip, apt-get, yum or brew. Run aws configure with your IAM credentials, then execute the bash script below to stop an EC2 instance tagged with the key Appname and a value starting with Appname Prod. You can use the awscli to tag your instances, or tag them manually from the AWS console. aws ec2 stop-instances stops the instance, and jq is used to filter the JSON returned by aws ec2 describe-instances and fetch the correct instance ID from the tags.
To verify that aws configure was successful and returns JSON output, run aws ec2 describe-instances; your running instance ID should appear in the output. Here is a sample output:
{
"Reservations": [
{
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": "ec2-xxx.ap-south-1.compute.amazonaws.com",
"State": {
"Code": xx,
"Name": "running"
},
"EbsOptimized": false,
"LaunchTime": "20xx-xx-xxTxx:16:xx.000Z",
"PublicIpAddress": "xx.127.24.xxx",
"PrivateIpAddress": "xxx.31.3.xxx",
"ProductCodes": [],
"VpcId": "vpc-aaxxxxx",
"StateTransitionReason": "",
"InstanceId": "i-xxxxxxxx",
"ImageId": "ami-xxxxxxx",
"PrivateDnsName": "ip-xxxx.ap-south-1.compute.internal",
"KeyName": "node",
"SecurityGroups": [
{
"GroupName": "xxxxxx",
"GroupId": "sg-xxxx"
}
],
"ClientToken": "",
"SubnetId": "subnet-xxxx",
"InstanceType": "t2.xxxxx",
"NetworkInterfaces": [
{
"Status": "in-use",
"MacAddress": "0x:xx:xx:xx:xx:xx",
"SourceDestCheck": true,
"VpcId": "vpc-xxxxxx",
"Description": "",
"NetworkInterfaceId": "eni-xxxx",
"PrivateIpAddresses": [
{
"PrivateDnsName": "ip-xx.ap-south-1.compute.internal",
"PrivateIpAddress": "xx.31.3.xxx",
"Primary": true,
"Association": {
"PublicIp": "xx.127.24.xxx",
"PublicDnsName": "ec2-xx.ap-south-1.compute.amazonaws.com",
"IpOwnerId": "xxxxx"
}
}
],
"PrivateDnsName": "ip-xxx-31-3-xxx.ap-south-1.compute.internal",
"Attachment": {
"Status": "attached",
"DeviceIndex": 0,
"DeleteOnTermination": true,
"AttachmentId": "xxx",
"AttachTime": "20xx-xx-30Txx:16:xx.000Z"
},
"Groups": [
{
"GroupName": "xxxx",
"GroupId": "sg-xxxxx"
}
],
"Ipv6Addresses": [],
"OwnerId": "xxxx",
"PrivateIpAddress": "xx.xx.xx.xxx",
"SubnetId": "subnet-xx",
"Association": {
"PublicIp": "xx.xx.xx.xxx",
"PublicDnsName": "ec2-xx.ap-south-1.compute.amazonaws.com",
"IpOwnerId": "xxxx"
}
}
],
"SourceDestCheck": true,
"Placement": {
"Tenancy": "default",
"GroupName": "",
"AvailabilityZone": "xx"
},
"Hypervisor": "xxx",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xxx",
"Ebs": {
"Status": "attached",
"DeleteOnTermination": true,
"VolumeId": "vol-xxx",
"AttachTime": "20xxx-xx-xxTxx:16:xx.000Z"
}
}
],
"Architecture": "x86_64",
"RootDeviceType": "ebs",
"RootDeviceName": "/dev/xxx",
"VirtualizationType": "xxx",
"Tags": [
{
"Value": "xxxx centxx",
"Key": "Name"
}
],
"AmiLaunchIndex": 0
}
],
"ReservationId": "r-xxxx",
"Groups": [],
"OwnerId": "xxxxx"
}
]
}
The following bash script is stop-ec2.sh in /home/centos/cron-scripts/
(instance=$(aws ec2 describe-instances | jq '.Reservations[].Instances | select(.[].Tags[].Value | startswith("Appname Prod") ) | select(.[].Tags[].Key == "Appname") | {InstanceId: .[].InstanceId, PublicDnsName: .[].PublicDnsName, State: .[].State, LaunchTime: .[].LaunchTime, Tags: .[].Tags} | [.]' | jq -r .[].InstanceId) && aws ec2 stop-instances --instance-ids ${instance} )
Run the file using sh /home/centos/cron-scripts/stop-ec2.sh and verify that the EC2 instance gets stopped. To debug, run aws ec2 describe-instances | jq '.Reservations[].Instances | select(.[].Tags[].Value | startswith("Appname Prod") ) | select(.[].Tags[].Key == "Appname") | {InstanceId: .[].InstanceId, PublicDnsName: .[].PublicDnsName, State: .[].State, LaunchTime: .[].LaunchTime, Tags: .[].Tags} | [.]' | jq -r .[].InstanceId on its own and check that it returns the correct instance ID for the tagged instance.
Then, in crontab -e, the following line can be added:
30 14 * * * sh /home/centos/cron-scripts/stop-ec2.sh >> /tmp/stop
which will log the output to /tmp/stop. 30 14 * * * is the cron expression (14:30 UTC) that you can verify at https://crontab.guru/
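A matching start-ec2.sh can follow the same pattern; here is a sketch that uses the CLI's --filters option instead of jq (same hypothetical tag key and value as above):
# Look up the instance by tag (key Appname, value starting with "Appname Prod") and start it
instance=$(aws ec2 describe-instances \
  --filters "Name=tag:Appname,Values=Appname Prod*" \
  --query 'Reservations[].Instances[].InstanceId' --output text)
aws ec2 start-instances --instance-ids ${instance}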
Lambda script to stop instance:
import json
import boto3

# Enter the region your instances are in. Include only the region without specifying Availability Zone; e.g., 'us-east-1'
region = 'us-east-1'

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    # Give your instance's Name tag value here in place of ****-env
    filter = [{'Name': 'tag:Name', 'Values': ['****-env']}]
    instances = ec2.describe_instances(Filters=filter)
    # Take the first matching instance and stop it
    stop_instance = instances.get('Reservations')[0].get('Instances')[0].get('InstanceId')
    stop_instances = []
    stop_instances.append(stop_instance)
    ec2.stop_instances(InstanceIds=stop_instances)
Lambda script to start instance:
import json
import boto3

# Enter the region your instances are in. Include only the region without specifying Availability Zone; e.g., 'us-east-1'
region = 'us-east-1'

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    # Give your instance's Name tag value here in place of ****-env
    filter = [{'Name': 'tag:Name', 'Values': ['****-env']}]
    instances = ec2.describe_instances(Filters=filter)
    # Take the first matching instance and start it
    start_instance = instances.get('Reservations')[0].get('Instances')[0].get('InstanceId')
    start_instances = []
    start_instances.append(start_instance)
    ec2.start_instances(InstanceIds=start_instances)
An ASG scheduler (scheduled scaling on an Auto Scaling group) is the easiest option to manage EC2 instances if you are using an ASG. If you are not using an ASG, you can use either the AWS Instance Scheduler CloudFormation solution or a Lambda function triggered by a CloudWatch cron event.
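If you do use an ASG, here is a sketch of scheduled scaling actions that drop capacity to zero overnight and restore it in the morning (group name, sizes, and times are placeholders; note that scaling in terminates instances rather than stopping them):
# Scale the group to zero at 14:30 UTC every day
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-down-nightly \
  --recurrence "30 14 * * *" \
  --min-size 0 --max-size 0 --desired-capacity 0
# Bring one instance back at 06:00 UTC
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-up-morning \
  --recurrence "0 6 * * *" \
  --min-size 1 --max-size 1 --desired-capacity 1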