After reading the question How to SSH and run commands in EC2 using boto3?, I tried to use SSM to run a command on an EC2 instance automatically. However, when I write code like this:
import boto3

def execute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send a command to EC2 instances.
ssm_client = boto3.client('ssm')
commands = ['echo "hello world"']
instance_id = running_instance[0:1]
execute_command_on_instance(ssm_client, commands, instance_id)
It gives me this error:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:iam::62771xxxx946:user/Python_CloudComputing is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:eu-west-2:6277xxxx3946:instance/i-074f862c3xxxxfc07.
After using STS to generate credentials for the client, I got the code below.
import boto3

def execute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send a command to EC2 instances.
sts = boto3.client('sts')
sts_response = sts.get_session_token()
ACCESS_KEY = sts_response['Credentials']['AccessKeyId']
SECRET_KEY = sts_response['Credentials']['SecretAccessKey']
ssm_client = boto3.client(
    'ssm',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
commands = ['echo "hello world"']
instance_id = running_instance[0:1]
execute_command_on_instance(ssm_client, commands, instance_id)
However, this time it gives me:
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the SendCommand operation: The security token included in the request is invalid.
Can anybody tell me how to solve this problem?
Your IAM user or role is missing the permissions needed to access SSM.
You are also trying to use STS to get access, which overcomplicates what you need to do: the role that STS assumes would need the same SSM permissions anyway. There are many good cases for using STS (e.g. the principle of least privilege), but I don't think you need it here.
Amazon provides predefined policies for SSM that you can quickly add to a policy or role such as:
AmazonEC2RoleForSSM
AmazonSSMFullAccess
AmazonSSMReadOnlyAccess
This link will help you configure access to Systems Manager:
Configuring Access to Systems Manager
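As a minimal sketch (assuming your IAM user is the Python_CloudComputing user from the error message and that your own credentials are allowed to call IAM), attaching one of those managed policies with boto3 could look like this:
import boto3

iam = boto3.client('iam')

# Attach the AWS-managed SSM policy to the user named in the error message.
# The user name is taken from the question; adjust it for your account.
iam.attach_user_policy(
    UserName='Python_CloudComputing',
    PolicyArn='arn:aws:iam::aws:policy/AmazonSSMFullAccess'
)
Note that the target instance also has to be an SSM managed instance, i.e. it needs an instance profile that grants SSM access (for example AmazonEC2RoleForSSM) and a running SSM agent, otherwise SendCommand cannot reach it.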
When I access S3 from the CLI, it returns the bucket contents.
aws s3 ls druid-s3-bucket
PRE pccpdevint/
PRE test-druid/
2022-03-15 21:41:36 4 tes
But from the SDK, it fails with this error:
Unable to load credentials into profile [druidbotint]: AWS Access Key ID is not specified., com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#2ff8d560: Failed to connect to service endpoint: , com.amazonaws.auth.InstanceProfileCredentialsProvider#3cad42b9: Failed to connect to service endpoint: ]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1266) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:842) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:792) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713) ~[aws-java-sdk-core-1.12.37.jar:?]
I have a custom role and an external process to look up credentials.
Here is the .aws/credentials file:
[default]
source_profile = druidbotint
role_arn = arn:aws:iam::1233:role/worker-role
role_session_name = druidbotsession
[druidbotint]
credential_process = awsconnect -u druid_s -a 1233 -r custom_role -p conf
Here is the .aws/config file:
[profile conf]
region = us-west-2
Any idea what could be going wrong?
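As a quick sanity check (a sketch, assuming the same profile names as above), the same chain can be resolved with boto3, which also honors source_profile and credential_process; if this fails too, the problem is in the profile configuration rather than in the Java SDK call:
import boto3

# Resolve the [default] profile, which chains to [druidbotint] via
# source_profile and runs the credential_process defined there.
session = boto3.Session(profile_name='default')
sts = session.client('sts', region_name='us-west-2')

# If this prints the assumed-role ARN, the profile chain itself resolves
# and the issue is on the SDK side (e.g. which profile it actually loads).
print(sts.get_caller_identity()['Arn'])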
I have created two Elasticsearch domains: one in us-east-1 and another in us-west-2. I have registered a manual snapshot repository on the us-east-1 domain, taken a snapshot, and the data is in an S3 bucket in us-east-1.
How should I go about doing the restoration?
Main questions:
Do I have to set up cross-region replication of the S3 bucket to us-west-2, so that every time a snapshot is taken in us-east-1, it is automatically reflected in the us-west-2 bucket?
If so, do I have to be in us-west-2 to register the manual snapshot repository on that domain and that S3 bucket?
Will the restore API look like this?
curl -XPOST 'elasticsearch-domain-endpoint-us-west-2/_snapshot/repository-name/snapshot-name/_restore'
You don't need to create S3 buckets in several regions; only one is sufficient. So your S3 repository will be the existing bucket in us-east-1.
You need to create the snapshot repository in both of your clusters so that you can access it from both sides. From one cluster you will create snapshots and from the second cluster you'll be able to restore those snapshots.
Yes, that's correct.
1.- No, as Val said, you don't need to create S3 buckets in several regions; "all buckets work globally" (see AWS S3 Bucket with Multiple Regions).
2.- Yes, you do. You need to register the snapshot repository in both of your clusters:
one repository to write your snapshots to the S3 bucket from us-east-1,
and another on the us-west-2 domain, so the destination cluster can read those snapshots.
3.- Yes, it is.
Additionally, you need to sign your calls to AWS ES to be able to create the repository and to take the snapshot; the best option for me was to use the Python scripts described below. Signing is not necessary for the restore.
Follow these instructions:
https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb and
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html
Create a repository
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/'  # your Elasticsearch endpoint; include https:// and a trailing /. If you use VPC access, you can create a tunnel.
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",
        "region": "us-east-1",
        # don't forget to create the AmazonESSnapshotRole first
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Create a snapshot
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/' # include https:// and trailing /
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame/yoursnapshot_name' # the Elasticsearch API endpoint
url = host + path
payload = {
    "indices": "*",
    "include_global_state": "false",
    "ignore_unavailable": "false"
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
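Before restoring, it can help to confirm the snapshot has finished. A small sketch, appended to the snapshot script above (same host, awsauth and headers; the repository and snapshot names are the placeholders used earlier):
# Check the snapshot state before attempting a restore.
path = '_snapshot/yourreponame/yoursnapshot_name'
r = requests.get(host + path, auth=awsauth, headers=headers, verify=False)
print(r.status_code)
print(r.text)  # look for "state": "SUCCESS"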
Restore
Must be called without signing
curl -XPOST -k "https://localhost:9999/_snapshot/yourreponame/yoursnapshot_name/_restore" \
-H "Content-type: application/json" \
-d $'{
"indices": "*",
"ignore_unavailable": false,
"include_global_state": false,
"include_aliases": false
}'
It is highly recommended that the clusters have the same version.
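A sketch of that second repository registration on the us-west-2 domain (the endpoint is a placeholder, and readonly is a standard repository setting that keeps the destination cluster from writing into the shared bucket), assuming the same bucket and snapshot role as above:
import boto3
import requests
from requests_aws4auth import AWS4Auth

# Register the same S3 bucket on the destination (us-west-2) domain,
# marked read-only so only the source cluster writes snapshots to it.
host = 'https://your-us-west-2-domain-endpoint/'  # include https:// and trailing /
region = 'us-west-2'  # region of the destination domain
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",  # same bucket, still in us-east-1
        "region": "us-east-1",
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole",
        "readonly": True
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(host + '_snapshot/yourreponame', auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)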
I'm trying to create an ECS cluster and then launch an EC2 instance into that cluster. However, this is not happening.
My code:
import boto3

ecs_client = boto3.client(
    'ecs',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    region_name=region
)

ec2_client = boto3.client(
    'ec2',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    region_name=region
)

response = ecs_client.create_cluster(
    clusterName=cluster_name
)

response = ec2_client.run_instances(
    # Use the official ECS image
    ImageId="ami-0128839b21d19300e",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    IamInstanceProfile={
        "Name": "ecsInstanceRole"
    },
    UserData="#!/bin/bash \n echo ECS_CLUSTER=" + cluster_name + " >> /etc/ecs/ecs.config"
)
Screenshot of the ecsInstanceRole
From what I've read, the UserData should make this happen, but it is not working at the moment.
I tried to replicate your issue in us-east-1, but your boto3 code works fine. I had no problems creating a cluster and launching an instance into that cluster with your script. Note that your code will, by default, launch the instance into the default VPC.
Thus, the fault must be outside of the code provided. Possible causes include a misconfigured default VPC, custom changes to the ecsInstanceRole permissions, or a lack of connectivity to the ECS service.
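If you want to confirm from the script whether the instance actually registered with the cluster, a small sketch (continuing your script, so ecs_client and cluster_name are the names you already defined) could poll the container-instance list after the instance boots:
import time

# Give the instance time to boot and the ECS agent time to register.
for _ in range(30):
    container_instances = ecs_client.list_container_instances(cluster=cluster_name)
    if container_instances['containerInstanceArns']:
        print("Instance registered:", container_instances['containerInstanceArns'])
        break
    time.sleep(10)
else:
    print("No container instance registered; check the VPC, the instance profile, and the ECS agent logs.")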
Screenshot of ES Role Selection console
I'm trying to put a document into an AWS ES cluster. Code:
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3

host = 'search-dev-operations-2-XXXXXXXX.us-east-2.es.amazonaws.com'  # For example, my-test-domain.us-east-1.es.amazonaws.com
region = 'us-east-2'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)

document = {
    "title": "Moneyball",
    "director": "Bennett Miller",
    "year": "2011"
}

es.index(index="dev-operations-2", doc_type="_doc", id="5", body=document)
print(es.get(index="dev-operations-2", doc_type="_doc", id="5"))
Getting this error message:
elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, '{"Message":"User: arn:aws:iam::XXXXXX:user/andrey.tantsuyev#XXXtechnology.com is not authorized to perform: es:ESHttpPut with an explicit deny"}')
I set up arn:aws:iam::XXXXXX:user/andrey.tantsuyev#XXXtechnology.com as the IAM master user through fine-grained access control. This is my AWS user.
Could anybody help me, please? I have no idea why I'm not authorized.
Screenshot of ES Cluster details
This is not a problem in Elasticsearch; the request is being blocked by the policies associated with your IAM user.
Go to the IAM service console and look up the permissions for the andrey.tantsuyev#XXXtechnology.com user. It appears that there is a "Deny" statement associated with one of the groups/policies attached to the user that matches the es:ESHttpPut action.
The problem was that andrey.tantsuyev#XXXtechnology.com had MFA restrictions. Once I implemented AssumeRole with MFA credentials, everything started working fine.
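For anyone hitting the same thing, here is a sketch of what that can look like with boto3 (the role ARN, MFA device serial, and token code are placeholders for your own values):
import boto3
from requests_aws4auth import AWS4Auth

# Assume a role using MFA, then sign the ES requests with the temporary credentials.
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::XXXXXX:role/your-es-access-role',  # placeholder
    RoleSessionName='es-put-session',
    SerialNumber='arn:aws:iam::XXXXXX:mfa/your-mfa-device',  # placeholder
    TokenCode='123456'  # current MFA code
)
creds = assumed['Credentials']

awsauth = AWS4Auth(
    creds['AccessKeyId'],
    creds['SecretAccessKey'],
    'us-east-2',
    'es',
    session_token=creds['SessionToken']
)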
Is there a way to filter instances by IAM role?
Basically I want a script that terminates all the instances that I've launched, but doesn't touch instances launched with other IAM roles.
Method 1:
If it is just a one-time activity, you can consider using aws-cli itself.
Use the below aws-cli command to list all instances with a particular IAM Role.
aws ec2 describe-instances --region us-east-1 --query 'Reservations[*].Instances[?IamInstanceProfile.Arn==`<Enter you Instance Profile ARN here>`].{InstanceId: InstanceId}' --output text
Replace <Enter you Instance Profile ARN here> with your Instance Profile ARN.
NOTE:
You must enter the Instance Profile ARN and NOT the Role ARN.
The Instance Profile ARN will be of the form:
arn:aws:iam::xxxxxxxxxxxx:instance-profile/Profile-ASDNSDLKJ
You can then pass the list of instance IDs returned above to the terminate-instances CLI command. The instance IDs must be separated by spaces.
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0 i-1234567890jkefpq1
Method 2:
import boto3

client = boto3.client('ec2', region_name='us-east-1')

response = client.describe_instances(
    Filters=[
        {
            'Name': 'iam-instance-profile.arn',
            'Values': [
                'arn:aws:iam::1234567890:instance-profile/MyProfile-ASDNSDLKJ',
            ]
        },
    ]
)

terminate_instance_list = []
for resp in response['Reservations']:
    for inst in resp['Instances']:
        # print(inst['InstanceId'])
        terminate_instance_list.append(inst['InstanceId'])

# print(terminate_instance_list)

if terminate_instance_list:
    response = client.terminate_instances(
        InstanceIds=terminate_instance_list
    )
    print(response)