I'm moving all the instances under each service from an old AWS account into a new AWS account. I've found ways to move EC2 and RDS into the other account.
To move an EC2 instance, I created an AMI and shared it with the new AWS account, then launched a new instance from that image.
To move an RDS instance, I created a snapshot and shared it with the new AWS account, then restored the shared snapshot in the new account.
Now I need to move Elasticsearch from the old account to the new one, but I haven't been able to figure out a way to do it. Can anyone help me with this?
Create a role with Elasticsearch permissions, or use an existing role with the following trust relationship:
{
  "Effect": "Allow",
  "Principal": {
    "Service": "es.amazonaws.com"
  },
  "Action": "sts:AssumeRole"
}
Grant iam:PassRole to the IAM user whose access/secret keys will be used to take the snapshot:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::accountID:role/TheServiceRole"
  }
}
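If you prefer to script these two IAM steps instead of clicking through the console, a rough boto3 sketch could look like the following; the role name, user name, and account ID are placeholders you would replace with your own:
import json
import boto3

iam = boto3.client('iam')

# Trust policy that lets the Elasticsearch service assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "es.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='TheServiceRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# Inline policy granting iam:PassRole to the user who runs the snapshot code
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::accountID:role/TheServiceRole"
    }]
}

iam.put_user_policy(
    UserName='snapshot-user',  # placeholder: the user whose keys are used below
    PolicyName='AllowPassServiceRole',
    PolicyDocument=json.dumps(pass_role_policy)
)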
Change the access & secret keys, host, region, path, and payload in the code below and execute it:
import requests
from requests_aws4auth import AWS4Auth
AWS_ACCESS_KEY_ID=''
AWS_SECRET_ACCESS_KEY=''
region = 'us-west-1'
service = 'es'
awsauth = AWS4Auth(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, region, service)
host = 'https://elasticsearch-domain.us-west-1.es.amazonaws.com/' # include https:// and trailing /
# REGISTER REPOSITORY
path = '_snapshot/my-snapshot-repo' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "s3-bucket-name",
        "region": "us-west-1",
        "role_arn": "arn:aws:iam::accountID:role/TheServiceRole"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers) # requests.get, post, put, and delete all have similar syntax
print(r.text)
To take the snapshot and store it in S3:
path = '_snapshot/my-snapshot-repo/my-snapshot'
url = host + path
r = requests.put(url, auth=awsauth)
print(r.text)
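Snapshots are taken asynchronously, so before sharing anything you may want to confirm the snapshot has completed. A simple check (a sketch reusing the same host and awsauth as above) is a GET on the snapshot path; the returned JSON should report "state": "SUCCESS" when it is done:
# Check snapshot progress
path = '_snapshot/my-snapshot-repo/my-snapshot'
url = host + path
r = requests.get(url, auth=awsauth)
print(r.text)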
Now the snapshot is ready. Share it with the other account and use the same code with the new account's keys and endpoint to restore it, as in the snippets below.
To restore all indices from the snapshot:
path = '_snapshot/my-snapshot-repo/my-snapshot/_restore'
url = host + path
r = requests.post(url, auth=awsauth)
print(r.text)
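Note that Elasticsearch will refuse to restore an index that already exists with the same name in the target domain. If that happens, one option (a destructive sketch, only if the existing index can be discarded) is to delete the conflicting index first and then re-run the restore:
# Delete an index that would conflict with the restore (irreversible!)
path = 'my-index'
url = host + path
r = requests.delete(url, auth=awsauth)
print(r.text)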
To restore a single index from the snapshot:
path = '_snapshot/my-snapshot-repo/my-snapshot/_restore'
url = host + path
payload = {"indices": "my-index"}
headers = {"Content-Type": "application/json"}
r = requests.post(url, auth=awsauth, json=payload, headers=headers)
print(r.text)
Reference: AWS docs.
My goal is to upload objects to S3. I have been trying with both the smart_open and boto3 libraries, with no success.
I don't know much about configuring IAM policies or access points in S3, and I'm finding it very hard to debug and understand how to pass the configuration.
IAM
this is my policy - it should be open and allow PUT. I don't have any access point set.
{
  "Version": "2012-10-17",
  "Id": "Policy1449009487903",
  "Statement": [
    {
      "Sid": "Stmt1449009478455",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://s3-us-west-2.amazonaws.com/MY_BUCKET/*"
          ]
        }
      }
    }
  ]
}
boto3
With boto3, I try to open a session, and then upload a file from local disk:
import boto3
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
s3 = boto3.resource('s3')
s3.Bucket(S3_BUCKET).upload_file(path_to_my_file_on_disk, 'test.json')
But I got an error (very long) which ends with:
EndpointConnectionError: Could not connect to the endpoint URL: "https://MY_BUCKET.s3.us-oregon.amazonaws.com/test.json"
Note that the URL is different from the URI of an object shared on S3, which should be:
s3://MY_BUCKET/test.json
Looking at:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
I just tried:
import boto3
# Print out bucket names
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
And it yields an error, failing to connect:
EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.us-oregon.amazonaws.com/"
Smart_open
I tried with smart_open like this:
with smart_open.open('s3://{}:{}#{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename), 'wb') as o:
    o.write(json.dumps(template).encode('utf8'))
But here too it fails to connect, and it does not say why.
Reading on Stack Overflow, some threads reported that uploading with smart_open versions >= 5.0.0 could be more complicated - see:
https://github.com/RaRe-Technologies/smart_open/blob/develop/howto.md
So I tried:
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY)
with smart_open.open(
        's3://' + S3_BUCKET + '/robots.txt', mode='w', transport_params={'client': session.client('s3')}) as o:
    o.write("nothing to see here\n")
    o.close()
No success
with smart_open.open(
        's3://' + S3_BUCKET + '/robots.txt',
        'w',
        transport_params={
            'client_kwargs': {
                'S3.Client.create_multipart_upload': {
                    'ServerSideEncryption': 'aws:kms'
                }
            },
            'client': boto3.client('s3')
        }
) as o:
    o.write("nothing to see here\n")
    o.close()
No success.
Can you help debug and point me in the right direction?
I found a solution for boto3:
it turned out I had to specify the correct region when creating the client:
s3 = boto3.client('s3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION
)
s3.upload_file(path_to_filename, S3_BUCKET, 'test.json')
That worked.
However, with smart_open I could not find a solution:
Ref.
How to use Python smart_open module to write to S3 with server-side encryption
I tried to specify the boto3 session as above, and then:
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION)
client_kwargs = {'S3.Client.create_multipart_upload': {'ServerSideEncryption': 'AES256'}}
with smart_open.open('s3://{}:{}#{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename), 'wb',
                     transport_params={'client_kwargs': client_kwargs}) as o:
    o.write(json.dumps(myfile).encode('utf8'))
Can someone show a correct way for smart_open as well? I'm using version 6.3.0.
I'm posting this partial answer in case someone finds it useful; debugging was cumbersome for me, and I'm not an expert on AWS IAM either.
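For what it's worth, here is a minimal sketch that may work for smart_open too, assuming the same root cause (a missing or incorrect region): build a region-aware client from the session and pass it via transport_params instead of embedding the keys in the URL. The variable names are the same placeholders used above:
import json
import boto3
import smart_open

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION)

# Pass an explicit, region-aware S3 client to smart_open
with smart_open.open('s3://' + S3_BUCKET + '/test.json', 'w',
                     transport_params={'client': session.client('s3')}) as o:
    o.write(json.dumps({"hello": "world"}))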
I have tried everything but can't get any clue what's wrong with my IAM policy for Cognito sub / identity ID based access.
I am using Lambda to get authentication details and then get_object from a folder separated per Cognito user, using boto3.
Here's my Lambda code:
import json
import urllib.parse
import boto3
import sys
import hmac, hashlib, base64

print('Loading function')

cognito = boto3.client('cognito-idp')
cognito_identity = boto3.client('cognito-identity')


def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    username = '{substitute_with_my_own_data}'       # authenticated user
    app_client_id = '{substitute_with_my_own_data}'  # cognito client id
    key = '{substitute_with_my_own_data}'            # cognito app client secret key
    cognito_provider = 'cognito-idp.{region}.amazonaws.com/{cognito-pool-id}'

    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(key, 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)

    auth_data = {'USERNAME': username, 'PASSWORD': '{substitute_user_password}', 'SECRET_HASH': secret_hash}

    auth_response = cognito.initiate_auth(
        AuthFlow='USER_PASSWORD_AUTH',
        AuthParameters=auth_data,
        ClientId=app_client_id
    )
    print(auth_response)

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    auth_result = auth_response['AuthenticationResult']
    id_token = auth_result['IdToken']

    id_response = cognito_identity.get_id(
        IdentityPoolId='{sub_cognito_identity_pool_id}',
        Logins={cognito_provider: id_token}
    )
    print('id_response = ' + id_response['IdentityId'])  # up to this stage, verified the correct user cognito identity id is returned

    credentials_response = cognito_identity.get_credentials_for_identity(
        IdentityId=id_response['IdentityId'],
        Logins={cognito_provider: id_token}
    )
    secretKey = credentials_response['Credentials']['SecretKey']
    accessKey = credentials_response['Credentials']['AccessKeyId']
    sessionToken = credentials_response['Credentials']['SessionToken']
    print('secretKey = ' + secretKey)
    print('accessKey = ' + accessKey)
    print('sessionToken = ' + sessionToken)

    # Use the temporary credentials that AssumeRole returns to make a
    # connection to Amazon S3
    s3 = boto3.client(
        's3',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey,
        aws_session_token=sessionToken,
    )

    # Use the Amazon S3 resource object that is now configured with the
    # credentials to access your S3 buckets.
    # for bucket in s3.buckets.all():
    #     print(bucket.name)

    # Get the object from the event and show its content type
    bucket = '{bucket-name}'
    key = 'abc/{user_cognito_identity_id}/test1.txt'
    prefix = 'abc/{user_cognito_identity_id}'

    try:
        response = s3.get_object(
            Bucket=bucket,
            Key=key
        )
        # response = s3.list_objects(
        #     Bucket=bucket,
        #     Prefix=prefix,
        #     Delimiter='/'
        # )
        print(response)
        return response
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
What I have verified:
authentication OK
identity with correct assumed role (printed the cognito identity ID and verified it's the correct authenticated user with the ID)
removed ${cognito-identity.amazonaws.com:sub} and granted general access to the authenticated role > then I am able to get the object; however, ${cognito-identity.amazonaws.com:sub} does not seem to be detected and matched correctly
So it seems there's an issue with the IAM policy.
IAM policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "*/${cognito-identity.amazonaws.com:sub}/*"
          ]
        }
      }
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
        "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}
I tried listing the bucket / getting an object / putting an object; all are denied.
I did try playing around with the policy, such as removing the ListBucket condition (that obviously allows access then, since I have authenticated) or changing "s3:prefix" to "${cognito-identity.amazonaws.com:sub}/" or "cognito/${cognito-identity.amazonaws.com:sub}/", but I can't make anything work.
The same goes for put and get object.
My S3 folder structure is bucket-name/cognito/{cognito-user-identity-id}/key.
I referred to:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
https://aws.amazon.com/blogs/mobile/understanding-amazon-cognito-authentication-part-3-roles-and-policies/
Any insights on what might be wrong?
I managed to resolve this after changing the GetObject and PutObject policy Resources from
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
to
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
and it works magically. I don't quite get why Cognito would deny the access, since my bucket has the cognito prefix right after the bucket root, but this is resolved now.
I'm trying to get or list files from an S3 bucket. The bucket is set up with no private access and has no specific permissions added.
I'm trying to access it from EC2, configured with a role that has full S3 access; this worked before.
I'm also trying to access it from Lambda, configured with a role that has full S3 access; this is new and has never worked before.
According to the IAM simulator this should be allowed.
This is an excerpt from my Lambda (python):
import json
import boto3
from datetime import datetime


def lambda_handler(event, context):
    bucket = 'mybucketname'  # this is the name itself, no url or arn or anything

    # check if file exists
    s3client = boto3.client('s3')
    key = 'mypath/' + 'anotherbitofpath' + '/' + 'index.html'
    print(f"key = {key}")
    objs = s3client.list_objects_v2(
        Bucket=bucket,
        Prefix=key
    )
    print(f"objs = {objs}")
    if any(obj['Key'] == key for obj in objs.get('Contents', [])):
        print("Exists!")
    else:
        print("Doesn't exist")
many thanks
I implemented this exact use case. I can access S3 objects from a Lambda function. The only difference is I implemented my code in Java. This method that tags objects works perfectly in a Lambda function.
private void tagExistingObject(S3Client s3, String bucketName, String key, String label, String LabelValue) {
    try {
        GetObjectTaggingRequest getObjectTaggingRequest = GetObjectTaggingRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        GetObjectTaggingResponse response = s3.getObjectTagging(getObjectTaggingRequest);

        // Get the existing immutable list - cannot modify this list.
        List<Tag> existingList = response.tagSet();
        ArrayList<Tag> newTagList = new ArrayList<>(existingList);

        // Create a new tag.
        Tag myTag = Tag.builder()
                .key(label)
                .value(LabelValue)
                .build();

        // Push the new tag to the list.
        newTagList.add(myTag);
        Tagging tagging = Tagging.builder()
                .tagSet(newTagList)
                .build();

        PutObjectTaggingRequest taggingRequest = PutObjectTaggingRequest.builder()
                .key(key)
                .bucket(bucketName)
                .tagging(tagging)
                .build();
        s3.putObjectTagging(taggingRequest);
        System.out.println(key + " was tagged with " + label);

    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
The role I use has full access to S3, and there are no issues performing S3 operations from a Lambda function.
Update the bucket policy so that it specifies the ARN of the Lambda function's IAM role (execution role) as a Principal that has access to the action s3:GetObject. You can use a bucket policy similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::YourAWSAccount:role/AccountARole"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::YourBucketName/*"
      ]
    }
  ]
}
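If you would rather apply that bucket policy from code than from the console, a small boto3 sketch (bucket name, account ID, and role name are the same placeholders as in the policy above) could be:
import json
import boto3

s3 = boto3.client('s3')

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::YourAWSAccount:role/AccountARole"},
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Resource": ["arn:aws:s3:::YourBucketName/*"]
    }]
}

# Attach the policy so the Lambda execution role can read objects in the bucket
s3.put_bucket_policy(Bucket='YourBucketName', Policy=json.dumps(bucket_policy))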
I am trying to accomplish the following scenario:
1) Account A uploads a file to an S3 bucket owned by Account B. At upload time I grant full control to the bucket owner, Account B:
s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)
2) Account B defines a bucket policy that limits access to the objects by IP (see below):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowIPs",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketB/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<CIDR1>",
            "<CIDR2>"
          ]
        }
      }
    }
  ]
}
I get Access Denied if I try to download the file as an anonymous user, even from the specified IP range. If at upload time I also add public read permission for everyone, then I can download the file from any IP:
s3_client.upload_file(
    local_file, bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=<AccountB_CanonicalID>',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)
Question: is it possible to upload the file from Account A to Account B but still restrict public access to an IP range?
This is not possible. According to the documentation:
Bucket Policy – For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.
However, there is a workaround for this scenario. The problem is that the owner of the uploaded file is Account A. We need to upload the file in such a way that the owner of the file is Account B. To accomplish this we need to:
In Account B, create a role for a trusted entity (select "Another AWS account" and specify Account A) and add upload permissions for the bucket.
In Account A, create a policy that allows the AssumeRole action and, as the resource, specify the ARN of the role created in step 1.
To upload the file from boto3 you can use the following code. Note the use of cachetools to deal with limited TTL of temporary credentials.
import logging
import sys
import time

import boto3
from cachetools import cached, TTLCache

CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()


def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='cross-account-upload',  # any label for the temporary session
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()
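For step 1, if you want to create the role in Account B from code rather than the console, a rough boto3 sketch could look like the following; the role name, bucket name, and Account A ID are placeholders, and the exact permissions you grant may differ:
import json
import boto3

iam = boto3.client('iam')  # run with Account B credentials

# Trust policy: allow Account A to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<AccountA_ID>:root"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='cross-account-upload',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# Inline policy: allow uploads to the bucket owned by Account B
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::bucketB/*"
    }]
}

iam.put_role_policy(
    RoleName='cross-account-upload',
    PolicyName='allow-upload-to-bucketB',
    PolicyDocument=json.dumps(upload_policy)
)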
We're using the AWS SDK (.NET) and have successfully uploaded files through our program using PutObjectRequest. I know how to set the ACL permissions on the file once it's created, but when trying to get the file using GetObjectRequest our application gets "Access Denied". I realize that I don't know what the user ID is for the application that's running. How can I make sure my application has the permissions needed to read the file, without using "public" rights? (Setting the ACL on the file to public works for the application.)
Is there a way to make the application retrieve a file as a certain user or group?
Any AWS API call needs an access key and secret key pair, which can be set from:
hard-coded values in the app,
the AWS configuration file (by default ~/.aws/credentials or C:\Users\USERNAME\.aws\credentials; see the example below), or
an instance role for EC2 instances.
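For reference, a minimal shared credentials file (the same file is read by the .NET SDK and the other AWS SDKs); the values here are placeholders:
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY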
For #1 and #2, you can check your IAM user's permissions at https://console.aws.amazon.com/iam/home?region=us-east-1#users.
For #3, you can check your IAM role at https://console.aws.amazon.com/iam/home?region=us-east-1#roles.
Make sure the user/role has enough permission to read your S3 object. You can attach this policy to the user/role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1455021573875",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
Adjust the above policy; your app should then have read access to the object.
By the way, you can also set the ACL while creating a resource. You can find the documentation at http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-using-dot-net-sdk.html.
static string bucketName = "*** Provide existing bucket name ***";
static string newBucketName = "*** Provide a name for a new bucket ***";
static string newKeyName = "*** Provide a name for a new key ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Retrieve ACL from one of the owner's buckets
S3AccessControlList acl = client.GetACL(new GetACLRequest
{
    BucketName = bucketName,
}).AccessControlList;

// Describe grant for full control for owner.
S3Grant grant1 = new S3Grant
{
    Grantee = new S3Grantee { CanonicalUser = acl.Owner.Id },
    Permission = S3Permission.FULL_CONTROL
};

// Describe grant for write permission for the LogDelivery group.
S3Grant grant2 = new S3Grant
{
    Grantee = new S3Grantee { URI = "http://acs.amazonaws.com/groups/s3/LogDelivery" },
    Permission = S3Permission.WRITE
};

PutBucketRequest request = new PutBucketRequest()
{
    BucketName = newBucketName,
    BucketRegion = S3Region.US,
    Grants = new List<S3Grant> { grant1, grant2 }
};
PutBucketResponse response = client.PutBucket(request);

PutObjectRequest objectRequest = new PutObjectRequest()
{
    ContentBody = "Object data for simple put.",
    BucketName = newBucketName,
    Key = newKeyName,
    Grants = new List<S3Grant> { grant1 }
};
PutObjectResponse objectResponse = client.PutObject(objectRequest);