I have an AWS Elasticsearch domain in the eu-west-1 region and have taken a snapshot to an S3 bucket sub-folder in the same region.
I have also deployed a second AWS Elasticsearch domain in another AWS region, eu-west-2.
I added S3 bucket replication between the buckets, but when I try to register the repository on the eu-west-2 AWS ES domain, I get the following error:
500
{"error":{"root_cause":[{"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists"}],"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists","caused_by":{"type":"amazon_s3_exception","reason":"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 14F0571DFF522922; S3 Extended Request ID: U1OnlKPOkfCNFzoV9HC5WBHJ+kfhAZDMOG0j0DzY5+jwaRFJvHkyzBacilA4FdIqDHDYWPCrywU=)"}},"status":500}
This is the code I am using to register the repository on the new cluster (taken from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore):
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/' # include https:// and trailing /
region = 'eu-west-2' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
# Register repository
path = '_snapshot/es-elk-prod' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
From the logs, I get:
curl -X GET 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-mw-elk-prod/_all?pretty'
{
"error" : {
"root_cause" : [
{
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
}
],
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
},
"status" : 500
}
The ARN is able to access the S3 bucket, as it is the same ARN I use to snapshot the eu-west-2 domain to S3. Since the eu-west-1 snapshot is stored in a sub-folder of the S3 bucket, I added a path to the code, such that:
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "path": "es-elk-prod",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
But this didn't work either.
What is the correct way to restore a snapshot created in one AWS region into another AWS region?
Any advice is much appreciated.
I've had similar (but not identical) error messages along the lines of "The bucket is in this region: eu-west-1. Please use this region to retry the request" when moving from eu-west-1 to us-west-2.
According to Amazon's documentation (under "Migrating data to a different domain"), you will need to specify an endpoint rather than a region:
If you encounter this error, try replacing "region": "us-east-2" with "endpoint": "s3.amazonaws.com" in the PUT statement and retry the request.
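For example, a hedged sketch of how the payload from the question might look with that substitution (the bucket and role ARN placeholders are carried over from the question):
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/'
awsauth = AWS4Auth(*boto3.Session().get_credentials().get_frozen_credentials()[:2], 'eu-west-2', 'es')

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        # "endpoint" replaces "region", as suggested in the AWS docs quote above
        "endpoint": "s3.amazonaws.com",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
r = requests.put(host + '_snapshot/es-elk-prod', auth=awsauth, json=payload,
                 headers={"Content-Type": "application/json"})
print(r.status_code, r.text)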
Related
When executing this command, I get this error:
C:\WINDOWS\system32>eksctl create cluster --name eksctl-demo --profile myAdmin2
Error: checking AWS STS access – cannot get role ARN for current session: operation error STS: GetCallerIdentity, failed to sign request: failed to retrieve credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: i/o timeout
The myAdmin2 IAM user's credentials are set up as follows:
Credentials file:
[myAdmin2]
aws_access_key_id = ******************
aws_secret_access_key = ********************
config file:
[profile myAdmin2]
region = us-east-2
output = json
myAdmin2 has access to the console:
C:\WINDOWS\system32>aws iam list-users --profile myAdmin2
{
"Users": [
{
"Path": "/",
"UserName": "myAdmin",
"UserId": "AIDAYYPFV776ELVEJ5ZVQ",
"Arn": "arn:aws:iam::602313981948:user/myAdmin",
"CreateDate": "2022-09-30T19:08:08+00:00"
},
{
"Path": "/",
"UserName": "myAdmin2",
"UserId": "AIDAYYPFV776LEDK2PCCI",
"Arn": "arn:aws:iam::602313981948:user/myAdmin2",
"CreateDate": "2022-09-30T21:39:33+00:00"
}
]
}
I had problems working with myAdmin, which is why I created a new IAM user called myAdmin2.
myAdmin2 is granted AdministratorAccess permission:
As shown in this image
aws cli version installed:
C:\WINDOWS\system32>aws --version
aws-cli/2.7.35 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
My Env variables:
C:\WINDOWS\system32>set
AWS_ACCESS_KEY_ID=*********** (the same as in the credentials file)
AWS_CONFIG_FILE=~/.aws/config
AWS_DEFAULT_PROFILE=myAdmin2
AWS_DEFAULT_REGION=us-east-2
AWS_PROFILE=myAdmin2
AWS_SECRET_ACCESS_KEY=**************** (the same as in the credentials file)
AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
I think those are all the necessary details. If someone can help, please do; I can't move past this error!
It finally worked! Everything was configured correctly; I just had to reboot my laptop and that resolved the issue.
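For anyone hitting the same error, a quick way to confirm that a named profile can actually sign STS requests before running eksctl is a small boto3 sketch like the one below (assuming boto3 is installed; the profile name and region are the ones from this question):
import boto3

# Build a session from the named profile rather than loose env variables,
# then ask STS who we are; a successful call means the credentials can sign requests.
session = boto3.Session(profile_name="myAdmin2", region_name="us-east-2")
identity = session.client("sts").get_caller_identity()
print(identity["Arn"])  # expected: arn:aws:iam::602313981948:user/myAdmin2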
Thanks for the great packages!
I have a problem when developing with LocalStack, using the S3 service to create a presigned POST URL.
I have run LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
I have these settings: AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I made sure the bucket exists:
> awslocal s3api list-buckets
{
"Buckets": [
{
"Name": "my-bucket",
"CreationDate": "2021-11-16T08:43:23+00:00"
}
],
"Owner": {
"DisplayName": "webfile",
"ID": "bcaf1ffd86f41161ca5fb16fd081034f"
}
}
I try to create the presigned URL by running this in the console:
s3_client_sync.create_presigned_post(bucket_name=settings.S3_Bucket, object_name="application/test.png", fields={"Content-Type": "image/png"}, conditions=[["Expires", 3600]])
and the return looks like this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I tested it using Insomnia,
and I read the logs in LocalStack:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing, such that I cannot create the presigned POST URL?
The problem is with your AWS configuration -
AWS_ACCESS_KEY_ID=test // Should be an Actual access Key for the IAM user
AWS_SECRET_ACCESS_KEY=test // Should be an Actual Secret Key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // Endpoint seems wrong
S3_Bucket=my-bucket // Actual Bucket Name in AWS S3 console
For more information, read here and set up your environment with the correct AWS credentials: Setup AWS Credentials
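For reference, here is a minimal sketch of generating and then consuming a presigned POST against the LocalStack endpoint from the question, using boto3's generate_presigned_post directly (the bucket and key names are taken from the question; the local file name is illustrative):
import boto3
import requests

# Point the client at LocalStack; "test"/"test" are the dummy LocalStack credentials.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Any field added to Fields must also be covered by a matching condition.
presigned = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="application/test.png",
    Fields={"Content-Type": "image/png"},
    Conditions=[{"Content-Type": "image/png"}],
    ExpiresIn=3600,
)

# The upload must send every returned field plus the file itself.
with open("test.png", "rb") as f:
    resp = requests.post(presigned["url"], data=presigned["fields"],
                         files={"file": ("test.png", f)})
print(resp.status_code)  # 204 on success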
I am trying to upload an image file to one of my buckets on GCS using the Python google-cloud-storage API. I can list the buckets from the program, but when uploading an image I get the error below:
google.api_core.exceptions.Forbidden: 403 POST https://storage.googleapis.com/upload/storage/v1/b/projbucket1/o?uploadType=multipart: {
"error": {
"code": 403,
"message": "The account for bucket \"projbucket1\" has not enabled billing.",
"errors": [
{
"message": "The account for bucket \"projbucket1\" has not enabled billing.",
"domain": "global",
"reason": "accountDisabled",
"locationType": "header",
"location": "Authorization"
}
]
}
}
: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
where projbucket1 is the bucket I want to upload the file to.
I am using the Python code below for that:
import os
from google.cloud import storage  # pip install google-cloud-storage

def upload_image(Imagedata, kind):
    image_filename = Imagedata.filename
    # 'app' is the Flask application object defined elsewhere in this project
    fullpath = os.path.join(app.root_path, 'static/images', image_filename)
    bucket_name = "projbucket1"
    if kind == "User":
        destination = "projbucket1/UserImages/" + image_filename
    else:
        destination = "projbucket1/PostImages/" + image_filename
    storage_client = storage.Client()
    buckets = storage_client.list_buckets()  # listing the buckets works fine
    for bucket in buckets:
        print(bucket.name)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination)
    blob.upload_from_filename(fullpath)  # this call fails with the 403 above
I have the Storage Admin and Owner permissions for the service account I am using. Please help me with this case.
Thanks,
Pranamya
I have created two Elasticsearch domains - one in us-east-1 and another in us-west-2. I have registered a manual snapshot repository on the us-east-1 domain and taken a snapshot, and the data is in an S3 bucket in us-east-1.
How should I go about doing the restoration?
Main questions:
Do I have to set up cross-region replication of the S3 bucket to us-west-2, so that every time a snapshot is taken in us-east-1, it automatically reflects in the us-west-2 bucket?
If so, do I have to be in us-west-2 to register the manual snapshot repository on that domain and S3 bucket?
Will the restore API look like this?
curl -XPOST 'elasticsearch-domain-endpoint-us-west-2/_snapshot/repository-name/snapshot-name/_restore'
You don't need to create S3 buckets in several regions; only one is sufficient. So your S3 repository will be in us-west-2.
You need to create the snapshot repository in both of your clusters so that you can access it from both sides. From one cluster you will create snapshots and from the second cluster you'll be able to restore those snapshots.
Yes, that's correct.
1.- No, as Val said, you don't need to create S3 buckets in several regions: "all buckets work globally" (AWS S3 Bucket with Multiple Regions).
2.- Yes, you do. You need to register the snapshot repository in both of your clusters:
one repository to create your snapshots to the S3 bucket in us-east-1,
and another one on the us-west-2 cluster, in order to read those snapshots from your destination cluster.
3.- Yes, it is.
Additionally, you need to sign your calls to AWS ES in order to create the repository and take the snapshot; the best option for me was to use the Python scripts described below. Signing is not necessary for the restore.
Follow these instructions:
https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb and
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html
Create a repository
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/' # include https:// and trailing /; your Elasticsearch endpoint (if you use a VPC, you can create a tunnel)
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",
        "region": "us-east-1",
        # Don't forget to create the AmazonESSnapshotRole first
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Create a snapshot
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/' # include https:// and trailing /
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame/yoursnapshot_name' # the Elasticsearch API endpoint
url = host + path
payload = {
    "indices": "*",
    "include_global_state": "false",
    "ignore_unavailable": "false"
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
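Optionally, before restoring you can confirm the snapshot completed by listing the snapshots in the repository (a small sketch reusing the same placeholders as the scripts above):
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://localhost:9999/'  # same endpoint placeholder as above
region = 'us-east-1'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, 'es',
                   session_token=credentials.token)

# Each snapshot listed here should report "state": "SUCCESS" before you restore.
r = requests.get(host + '_snapshot/yourreponame/_all', auth=awsauth, verify=False)
print(r.status_code)
print(r.text)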
Restore
Must be called without signing
curl -XPOST -k "https://localhost:9999/_snapshot/yourreponame/yoursnapshot_name/_restore" \
-H "Content-type: application/json" \
-d $'{
"indices": "*",
"ignore_unavailable": false,
"include_global_state": false,
"include_aliases": false
}'
It is highly recommended that the clusters have the same version.
I'm having trouble getting a terraform AWS provider to talk to localstack. Whatever I try I just get the same error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: dc96c65d-84a7-4e64-947d-833195464538
This error suggests that the provider is making contact with an HTTP server but the credentials are being rejected (as per any 403). You might imagine the problem is that I'm feeding in the wrong credentials (through environment variables).
However, the hostname local-aws exists in my /etc/hosts file, but blahblahblah does not. If I swap the endpoint to point to http://blahblahblah:4566 I still get the same 403. So I think the problem is that the provider isn't using my local endpoint, and I can't work out why.
resource "aws_secretsmanager_secret_version" "foo" {
secret_id = aws_secretsmanager_secret.foo.id
secret_string = "bar"
}
resource "aws_secretsmanager_secret" "foo" {
name = "rabbitmq_battery_emulator"
}
provider "aws" {
region = "eu-west-2"
endpoints {
secretsmanager = "http://local-aws:4566"
}
}
First, check that localstack is configured to run sts. In docker-compose this was just the SERVICES environment variable:
services:
  local-aws:
    image: localstack/localstack
    environment:
      EDGE_PORT: 4566
      SERVICES: secretsmanager, sts
Then make sure that you set the sts endpoint as well as the service you require:
provider "aws" {
region = "eu-west-2"
endpoints {
sts = "http://local-aws:4566"
secretsmanager = "http://local-aws:4566"
}
}
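If it's still unclear whether Terraform or LocalStack is at fault, a quick out-of-band check of the STS endpoint can help (a sketch assuming boto3 is available and the local-aws hostname from /etc/hosts; LocalStack accepts the dummy test credentials):
import boto3

# Hit LocalStack's STS directly; getting a dummy caller identity back means the
# endpoint and service are up, and the remaining problem is in the provider config.
sts = boto3.client(
    "sts",
    region_name="eu-west-2",
    endpoint_url="http://local-aws:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
print(sts.get_caller_identity())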
In addition to the SERVICES and sts endpoint config mentioned by @philip-couling, I also had to remove a terraform block from my main.tf:
#terraform {
#  backend "s3" {
#    bucket = "valid-bucket"
#    key    = "terraform/state/account/terraform.tfstate"
#    region = "eu-west-1"
#  }
#  required_providers {
#    local = {
#      version = "~> 2.1"
#    }
#  }
#}