I'm trying to create an ECS cluster and then launch an EC2 instance into that cluster. However, the instance never joins the cluster.
My code:
import boto3

ecs_client = boto3.client(
    'ecs',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    region_name=region
)

ec2_client = boto3.client(
    'ec2',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    region_name=region
)

response = ecs_client.create_cluster(
    clusterName=cluster_name
)

response = ec2_client.run_instances(
    # Use the official ECS-optimized image
    ImageId="ami-0128839b21d19300e",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    IamInstanceProfile={
        "Name": "ecsInstanceRole"
    },
    UserData="#!/bin/bash\necho ECS_CLUSTER=" + cluster_name + " >> /etc/ecs/ecs.config"
)
The ecsInstanceRole already exists in my account. From what I've read, the UserData should make the instance join the cluster, but it is not doing so at the moment.
I tried to replicate your issue in us-east-1, but your boto3 code works fine: I had no problem creating a cluster and launching an instance into it with your script. Your code will, by default, launch the instance in the default VPC.
Thus, the fault must lie outside the code provided. Possible causes include a misconfigured default VPC, custom changes to the ecsInstanceRole permissions, or a lack of connectivity from the instance to the ECS service.
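One way to narrow this down (a minimal diagnostic sketch, reusing the ecs_client and cluster_name from the question) is to check whether the instance ever registers with the cluster; if the list stays empty, the problem is on the instance or networking side rather than in the create_cluster call:

import time

# Poll the cluster for registered container instances; the ECS agent on the
# instance normally shows up here within a few minutes of booting.
for _ in range(10):
    registered = ecs_client.list_container_instances(cluster=cluster_name)
    if registered['containerInstanceArns']:
        print("Instance registered:", registered['containerInstanceArns'])
        break
    time.sleep(30)
else:
    print("No container instances registered - check the instance's network access to ECS and the ecsInstanceRole permissions.")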
I have created two Elasticsearch domains - one in us-east-1 and another in us-west-2. I have registered a manual snapshot repository in the us-east-1 domain, taken a snapshot, and the data is in an S3 bucket in us-east-1.
How should I go about doing the restoration?
Main questions:
Do I have to set up cross-region replication of the S3 bucket to us-west-2, so that every time a snapshot is taken in us-east-1 it is automatically reflected in the us-west-2 bucket?
If so, do I have to be in us-west-2 to register the manual snapshot repository on that domain and that S3 bucket?
Will the restore API look like this?
curl -XPOST 'elasticsearch-domain-endpoint-us-west-2/_snapshot/repository-name/snapshot-name/_restore'
You don't need to create S3 buckets in several regions. Only one is sufficient, so your S3 repository will point to the existing bucket in us-east-1.
You need to create the snapshot repository in both of your clusters so that you can access it from both sides. From one cluster you will create snapshots, and from the second cluster you'll be able to restore them.
Yes, that's correct.
1.- No, as Val said, you don't need to create S3 buckets in several regions ("all buckets work globally": AWS S3 Bucket with Multiple Regions).
2.- Yes, you do. You need to create the snapshot repository in both of your clusters:
One repository to write your snapshots to the S3 bucket from us-east-1,
And another one registered on the us-west-2 domain, so that your destination cluster can read from the same bucket.
3.- Yes, it is.
Additionally, you need to sign your calls to AWS ES to be able to create the repository and to take the snapshot. The best option for me was the Python script shown below. Signing is not necessary for the restore.
Follow these instructions:
https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb and
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html
Create a repository
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://localhost:9999/'  # your Elasticsearch endpoint, including https:// and a trailing /; if the domain is in a VPC you can use a tunnel
region = 'us-east-1'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

path = '_snapshot/yourreponame'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",
        "region": "us-east-1",
        # Don't forget to create the AmazonESSnapshotRole first
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole"
    }
}

headers = {"Content-Type": "application/json"}

r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
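To confirm the repository was registered, you can issue a GET on the same path with the same signed session (an optional check, not part of the original script):

r = requests.get(url, auth=awsauth, headers=headers, verify=False)
print(r.status_code, r.text)  # returns the repository settings if registration succeeded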
Create a snapshot
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://localhost:9999/'  # include https:// and trailing /
region = 'us-east-1'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

path = '_snapshot/yourreponame/yoursnapshot_name'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "indices": "*",
    "include_global_state": "false",
    "ignore_unavailable": "false"
}

headers = {"Content-Type": "application/json"}

r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
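Before restoring, you can verify that the snapshot completed by querying it with the same signed session (an optional check via the standard _snapshot API, using the placeholder names from above):

r = requests.get(host + '_snapshot/yourreponame/yoursnapshot_name', auth=awsauth, headers=headers, verify=False)
print(r.status_code)
print(r.text)  # the snapshot state should be SUCCESS before you attempt a restore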
Restore
This call must be made without signing:
curl -XPOST -k "https://localhost:9999/_snapshot/yourreponame/yoursnapshot_name/_restore" \
-H "Content-type: application/json" \
-d $'{
"indices": "*",
"ignore_unavailable": false,
"include_global_state": false,
"include_aliases": false
}'
It is highly recommended that both clusters run the same Elasticsearch version.
I'm trying to revoke a VPN client ingress rule on 'destroy' in Terraform. Everything worked fine with Terraform 0.12.
Unfortunately, after upgrading to version 0.14, the same approach no longer works.
Here is what I have:
resource "null_resource" "client_vpn_ingress" {
provisioner "local-exec" {
when = create
command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --authorize-all-groups --region ${var.aws_region} --profile ${var.profile}"
}
provisioner "local-exec" {
when = destroy
command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region} --profile ${var.profile}"
}
}
and here is the error message:
Error: Invalid reference from destroy provisioner
on vpn_client_endpoint.tf line 84, in resource "null_resource"
"client_vpn_ingress": 84: command = "aws ec2
revoke-client-vpn-ingress --client-vpn-endpoint-id
${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr
${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region}
--profile ${var.profile}"
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self',
'count.index', or 'each.key'.
References to other resources during the destroy phase can cause
dependency cycles and interact poorly with create_before_destroy.
Unfortunately, I'm no longer able to use Terraform 0.12.
Does anyone have any idea how to revoke the rule on 'terraform destroy' in version >= 0.14?
As of version 2.70.0 of the Terraform AWS provider (see this GitHub comment), you can now do something like this:
resource "aws_ec2_client_vpn_authorization_rule" "vpn_auth_rule" {
depends_on = [
aws_ec2_client_vpn_endpoint.vpn
]
client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.vpn.id
target_network_cidr = "0.0.0.0/0"
authorize_all_groups = true
}
This way the ingress rule is managed as a first-class resource in Terraform state, and you don't have to worry about whether or when a CLI command gets executed.
Hi, I am working on AWS CDK. I am trying to get an existing non-default VPC. I tried the options below.
vpc = ec2.Vpc.from_lookup(self, id = "VPC", vpc_id='vpcid', vpc_name='vpc-dev')
This results in the error below:
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried is:
vpc = ec2.Vpc.from_vpc_attributes(self, 'VPC', vpc_id='vpc-839227e7', availability_zones=['ap-southeast-2a','ap-southeast-2b','ap-southeast-2c'])
This results in
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried is:
vpc = ec2.Vpc.from_lookup(self, id="VPC", is_default=True)  # This gets the default VPC and works
Can someone help me get a non-default VPC in AWS CDK? Any help would be appreciated. Thanks.
Take a look at the aws_cdk.aws_ec2 documentation and at CDK Runtime Context.
If your VPC is created outside your CDK app, you can use
Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the
stack’s region and account, and import the subnet configuration.
Looking up can be done by VPC ID, but more flexibly by searching for a
specific tag on the VPC.
Usage:
# Example automatically generated. See https://github.com/aws/jsii/issues/826
from aws_cdk.core import App, Stack, Environment
from aws_cdk import aws_ec2 as ec2

app = App()

# Information from the environment is used to get context information,
# so account and region have to be defined for the stack.
stack = Stack(
    app, "MyStack", env=Environment(account="account_id", region="region")
)

# Retrieve VPC information
vpc = ec2.Vpc.from_lookup(stack, "VPC",
    # This imports the default VPC, but you can also
    # specify a 'vpc_name' or 'tags'.
    is_default=True
)
Update with a relevant example:
vpc = ec2.Vpc.from_lookup(stack, "VPC",
vpc_id = VPC_ID
)
Update with a TypeScript example:
import ec2 = require('@aws-cdk/aws-ec2');
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC',{isDefault: true});
More info here.
For AWS CDK v2 or v1 (latest), you can use:
// You can either use vpcId OR vpcName and fetch the desired vpc
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC',{
vpcId: "VPC_ID",
vpcName: "VPC_NAME"
});
Here is a simple example:
// get VPC info from the AWS account; FYI, we are not rebuilding the VPC, we are referencing it
const DefaultVpc = Vpc.fromVpcAttributes(this, 'vpcdev', {
    vpcId: 'vpc-d0e0000b0',
    availabilityZones: core.Fn.getAzs(),
    privateSubnetIds: ['subnet-00a0de00'],
    publicSubnetIds: ['subnet-00a0de00'],
});

const yourService = new lambda.Function(this, 'SomeName', {
    code: lambda.Code.fromAsset("lambda"),
    handler: 'handlers.your_handler',
    role: lambdaExecutionRole,
    securityGroup: lambdaSecurityGroup,
    vpc: DefaultVpc,
    runtime: lambda.Runtime.PYTHON_3_7,
    timeout: Duration.minutes(2),
});
We can do it easily using ec2.Vpc.fromLookup.
https://kuchbhilearning.blogspot.com/2022/10/httpskuchbhilearning.blogspot.comimport-existing-vpc-in-aws-cdk.html
The post above shows how to use the method.
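Since the question uses the Python CDK, here is a minimal sketch of looking up a non-default VPC by ID; the account, region and stack name are placeholders, the VPC ID is the one from the question, and an explicit env is required for the lookup to resolve:

from aws_cdk.core import App, Stack, Environment
from aws_cdk import aws_ec2 as ec2

app = App()

# An explicit account/region is required so the CDK CLI can resolve the lookup
# and cache it in cdk.context.json (placeholder values shown here).
stack = Stack(app, "VpcLookupStack",
              env=Environment(account="123456789012", region="ap-southeast-2"))

# Look up the existing non-default VPC by its ID; a vpc_name or tags filter also works.
vpc = ec2.Vpc.from_lookup(stack, "ImportedVpc", vpc_id="vpc-839227e7")

app.synth()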
Is there a way to filter instances by IAM role?
Basically I want a script that terminates all the instances that I've launched, but doesn't touch instances launched with other IAM roles.
Method 1:
If it is just a one-time activity, you can consider using the AWS CLI itself.
Use the AWS CLI command below to list all instances with a particular IAM role.
aws ec2 describe-instances --region us-east-1 --query 'Reservations[*].Instances[?IamInstanceProfile.Arn==`<Enter you Instance Profile ARN here>`].{InstanceId: InstanceId}' --output text
Replace <Enter you Instance Profile ARN here> with the instance profile ARN.
NOTE:
You must enter the instance profile ARN and NOT the role ARN.
An instance profile ARN has the form:
arn:aws:iam::xxxxxxxxxxxx:instance-profile/Profile-ASDNSDLKJ
You can then pass the list of instance IDs returned above to the terminate-instances CLI command. The instance IDs must be separated by spaces.
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0 i-1234567890jkefpq1
Method 2:
import boto3

client = boto3.client('ec2', region_name='us-east-1')

# Find all instances launched with the given instance profile
response = client.describe_instances(
    Filters=[
        {
            'Name': 'iam-instance-profile.arn',
            'Values': [
                'arn:aws:iam::1234567890:instance-profile/MyProfile-ASDNSDLKJ',
            ]
        },
    ]
)

# Collect the instance IDs from every reservation
terminate_instance_list = []
for resp in response['Reservations']:
    for inst in resp['Instances']:
        # print(inst['InstanceId'])
        terminate_instance_list.append(inst['InstanceId'])

# print(terminate_instance_list)
if terminate_instance_list:
    response = client.terminate_instances(
        InstanceIds=terminate_instance_list
    )
    print(response)
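As an optional safety check (not part of the original answer), you can do a dry run first: terminate_instances accepts a DryRun flag, and with DryRun=True EC2 raises a DryRunOperation error instead of terminating anything, which confirms both your permissions and the instance list:

from botocore.exceptions import ClientError

try:
    client.terminate_instances(InstanceIds=terminate_instance_list, DryRun=True)
except ClientError as e:
    if e.response['Error']['Code'] == 'DryRunOperation':
        print("Dry run succeeded - the terminate call would be allowed.")
    else:
        raise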
After reading this question How to SSH and run commands in EC2 using boto3?
I tried to use SSM to automatically run commands on an EC2 instance. However, when I write code like this:
import boto3

def excute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # one of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send a command to EC2 instances.
ssm_client = boto3.client('ssm')
commands = ['echo "hello world"']
instance_id = running_instance[0:1]
excute_command_on_instance(ssm_client, commands, instance_id)
It gives me this error:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:iam::62771xxxx946:user/Python_CloudComputing is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:eu-west-2:6277xxxx3946:instance/i-074f862c3xxxxfc07
After using STS to generate credentials for the client, my code looks like this:
def excute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # one of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send a command to EC2 instances.
sts = boto3.client('sts')
sts_response = sts.get_session_token()
ACCESS_KEY = sts_response['Credentials']['AccessKeyId']
SECRET_KEY = sts_response['Credentials']['SecretAccessKey']
ssm_client = boto3.client(
    'ssm',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
commands = ['echo "hello world"']
instance_id = running_instance[0:1]
excute_command_on_instance(ssm_client, commands, instance_id)
However, this time I get:
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the SendCommand operation: The security token included in the request is invalid.
Can anybody tell me how to solve this problem?
You are missing permissions for the IAM user or the role to access SSM.
You are also overcomplicating things by using STS: whatever credentials or role STS hands back need the same SSM permissions anyway, and the temporary credentials returned by get_session_token only work if you also pass aws_session_token, which is why you see the "security token included in the request is invalid" error. There are many good cases for using STS (the principle of least privilege), but I don't think you need STS here.
Amazon provides predefined managed policies for SSM that you can quickly attach to a user or role, such as:
AmazonEC2RoleForSSM
AmazonSSMFullAccess
AmazonSSMReadOnlyAccess
This link will help you configure access to Systems Manager:
Configuring Access to Systems Manager
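To illustrate the simpler route, here is a minimal sketch that relies only on the default credential chain; the region and instance ID are placeholders taken from the question's error message, and it assumes the caller has ssm:SendCommand and the instance runs the SSM agent with a suitable instance profile:

import time
import boto3

ssm_client = boto3.client('ssm', region_name='eu-west-2')

# Send the command using whatever credentials boto3 resolves by default
# (environment variables, shared credentials file, or an attached role).
response = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['echo "hello world"']},
    InstanceIds=['i-074f862c3xxxxfc07'],  # placeholder instance ID
)
command_id = response['Command']['CommandId']

# Give SSM a moment to register the invocation, then fetch the result.
time.sleep(2)
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-074f862c3xxxxfc07',
)
print(output['Status'], output['StandardOutputContent'])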