Using Elasticsearch Terraform EC provider to deploy a cluster in AWS

I am looking to deploy ECE (Elastic Cloud Enterprise) in AWS with Terraform. Reading through the documentation, I'm still not clear how this model works.
In the provider block below, what is the reason for the endpoint? Is Terraform connecting to this endpoint with the specified username and password? And are these credentials provided with the ECE license?
Hence, I'm thinking that the ECE installation endpoint can't be private. But I need to provision this privately, so I probably won't be able to do it via Terraform. Does anyone have experience with this?
provider "ec" {
# ECE installation endpoint
endpoint = "https://my.ece-environment.corp"
# If the ECE installation has a self-signed certificate
# you must set insecure to true.
insecure = true
username = "my-username"
password = "my-password"
}
data "ec_stack" "latest" {
version_regex = "latest"
region = "us-east-1"
}
resource "ec_deployment" "example_minimal" {
# Optional name.
name = "my_example_deployment"
# Mandatory fields
region = "us-east-1"
version = data.ec_stack.latest.version
deployment_template_id = "aws-io-optimized-v2"
elasticsearch {}
}

Related

AWS Java SDK connection with role ARN and credential source from profile stored in config file

I am running my Java application on an EC2 instance, and I need to access cross-account resources, for which we have listed different profiles with a role ARN and credential source in the config file.
[profile abc]
role_arn = arn:aws:iam::12345678:role/abc-role
credential_source = Ec2InstanceMetadata
[profile xyz]
role_arn = arn:aws:iam::12345678:role/xyz-role
credential_source = Ec2InstanceMetadata
Java code using the profile:
File configFile = new File(System.getProperty("user.home"), ".aws/config");
System.out.println(configFile.getAbsolutePath());
AWSCredentialsProvider credentialsProvider = new ProfileCredentialsProvider(configFile.getAbsolutePath(), "profile abc");
AWSBatch client = AWSBatchClientBuilder.standard().withCredentials(credentialsProvider).withRegion("us-east-1").build();
Error
com.amazonaws.SdkClientException: Unable to load credentials into profile [default]: AWS Access Key ID is not specified.] with root cause
com.amazonaws.SdkClientException: Unable to load credentials into profile [default]: AWS Access Key ID is not specified.
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.fromStaticCredentials(ProfileStaticCredentialsProvider.java:55) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.<init>(ProfileStaticCredentialsProvider.java:40) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileAssumeRoleCredentialsProvider.fromAssumeRole(ProfileAssumeRoleCredentialsProvider.java:72) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileAssumeRoleCredentialsProvider.<init>(ProfileAssumeRoleCredentialsProvider.java:46) ~[aws-java-sdk-core-1.11.470.jar:na]
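For what it's worth, one workaround (a sketch, not from the original thread) is to bypass the profile file entirely and assume the role explicitly with the STS credentials provider from the aws-java-sdk-sts module; the session name is an arbitrary placeholder:
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.batch.AWSBatch;
import com.amazonaws.services.batch.AWSBatchClientBuilder;

// Assume the role directly; the default chain supplies the base credentials,
// which on EC2 resolves to instance metadata (credential_source = Ec2InstanceMetadata).
AWSCredentialsProvider credentialsProvider =
        new STSAssumeRoleSessionCredentialsProvider.Builder(
                "arn:aws:iam::12345678:role/abc-role", "abc-session")
                .build();

AWSBatch client = AWSBatchClientBuilder.standard()
        .withCredentials(credentialsProvider)
        .withRegion("us-east-1")
        .build();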

Migration of data from one Elasticsearch index to another in a different region in AWS using manual snapshots

I have created two Elasticsearch domains: one in us-east-1 and another in us-west-2. I have registered a manual snapshot repository in the us-east-1 domain and taken a snapshot, and the data is in an S3 bucket in us-east-1.
How should I go about doing the restoration?
Main questions:
Do I have to set up cross-region replication of the S3 bucket to us-west-2, so that every time a snapshot is taken in us-east-1, it automatically reflects in the us-west-2 bucket?
If so, do I have to be in us-west-2 to register the manual snapshot repository on that domain and that S3 bucket?
Will the restore API look like this?
curl -XPOST 'elasticsearch-domain-endpoint-us-west-2/_snapshot/repository-name/snapshot-name/_restore'
You don't need to create S3 buckets in several regions; only one is sufficient, so your S3 repository will stay in us-east-1, where the snapshots already are.
You need to register the snapshot repository in both of your clusters so that you can access it from both sides. From one cluster you will create snapshots, and from the second cluster you'll be able to restore them.
Yes, that's correct.
1.- No, as Val said, you don't need to create S3 buckets in several regions ("all buckets work globally": see AWS S3 Bucket with Multiple Regions).
2.- Yes, you do. You need to create the snapshot repository in both of your clusters:
one repository to write your snapshots to the S3 bucket from us-east-1,
and another registered on the us-west-2 domain, in order to read them from your destination cluster.
3.- Yes, it is.
Additionally, you need to sign your calls to AWS ES to be able to create the repo and to take the snapshot. The best option for me was to use the Python scripts described below. For the restore, signing is not necessary.
Follow these instructions:
https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb and
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html
Create a repository
import boto3
import requests
from requests_aws4auth import AWS4Auth

# Your Elasticsearch endpoint; include https:// and a trailing /.
# If the domain is inside a VPC, you can reach it through a tunnel.
host = 'https://localhost:9999/'
region = 'us-east-1'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)

path = '_snapshot/yourreponame'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",
        "region": "us-east-1",
        # Don't forget to create the AmazonESSnapshotRole
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole"
    }
}

headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Create a snapshot
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://localhost:9999/'  # include https:// and trailing /
region = 'us-east-1'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)

path = '_snapshot/yourreponame/yoursnapshot_name'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "indices": "*",
    "include_global_state": "false",
    "ignore_unavailable": "false"
}

headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Restore
This call must be made without signing:
curl -XPOST -k "https://localhost:9999/_snapshot/yourreponame/yoursnapshot_name/_restore" \
  -H "Content-Type: application/json" \
  -d $'{
    "indices": "*",
    "ignore_unavailable": false,
    "include_global_state": false,
    "include_aliases": false
  }'
It is highly recommended that the clusters have the same version.

How to create a secret in Google Cloud Secret Manager by Terraform?

This is the official page: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/secret_manager_secret
I created these files:
variables.tf
variable "gcp_project" {
  type = string
}
main.tf
resource "google_secret_manager_secret" "my_password" {
provider = google-beta
secret_id = "my-password"
replication {
automatic = true
}
}
data "google_secret_manager_secret_version" "my_password_v1" {
provider = google-beta
project = var.gcp_project
secret = google_secret_manager_secret.my_password.secret_id
version = 1
}
outputs.tf
output "my_password_version" {
  value = data.google_secret_manager_secret_version.my_password_v1.version
}
When apply it, got error:
Error: Error retrieving available secret manager secret versions: googleapi: Error 404: Secret Version [projects/2381824501/secrets/my-password/versions/1] not found.
So I created the secret with the gcloud CLI:
echo -n "my_secret_password" | gcloud secrets create "my-password" \
--data-file - \
--replication-policy "automatic"
Then I applied Terraform again, and it said Error: project: required field is not set.
How should I use Terraform to create a secret with a real value?
I found the following article that I consider to be useful on Managing Secret Manager with Terraform.
You have to:
Create the setup:
Create a file named versions.tf that defines the version constraints.
Create a file named main.tf and configure the Google provider stanza (a sketch of both files follows below).
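A minimal sketch of what those two files might contain; the version constraint and region here are illustrative assumptions, not taken from the article, and the secretmanager service enablement is included because the secret resource below depends on it:
# versions.tf
terraform {
  required_version = ">= 0.13"

  required_providers {
    google-beta = {
      source = "hashicorp/google-beta"
    }
  }
}

# main.tf
provider "google-beta" {
  project = var.gcp_project
  region  = "us-central1"  # placeholder region
}

# Enable the Secret Manager API; the secret resource below depends on this.
resource "google_project_service" "secretmanager" {
  service = "secretmanager.googleapis.com"
}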
This is the code for creating a Secret Manager secret named "my-secret" with an automatic replication policy:
resource "google_secret_manager_secret" "my-secret" {
provider = google-beta
secret_id = "my-secret"
replication {
automatic = true
}
depends_on = [google_project_service.secretmanager]
}
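Note that the resource above only creates the secret container. To store a real value, which is what the question asks about, you can add a google_secret_manager_secret_version that carries the secret data; a minimal sketch (the hard-coded value is for illustration only):
resource "google_secret_manager_secret_version" "my-secret-v1" {
  provider = google-beta

  secret      = google_secret_manager_secret.my-secret.id
  # Illustration only; avoid committing real secret values to .tf files.
  secret_data = "my_secret_password"
}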
Following @marian.vladoi's answer, if you're having issues with the Cloud Resource Manager API, enable it like so:
resource "google_project_service" "cloudresourcemanager" {
service = "cloudresourcemanager.googleapis.com"
}
You can also enable the Cloud Resource Manager API using this gcloud command in the terminal:
gcloud services enable cloudresourcemanager.googleapis.com

Terraform AWS not accessing localstack

I'm having trouble getting the Terraform AWS provider to talk to localstack. Whatever I try, I just get the same error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: dc96c65d-84a7-4e64-947d-833195464538
This error suggests that the provider is making contact with an HTTP server but the credentials are being rejected (as per any 403). You might imagine the problem is that I'm feeding in the wrong credentials (through environment variables).
However, the hostname local-aws exists in my /etc/hosts file, but blahblahblah does not. If I swap the endpoint to point to http://blahblahblah:4566 I still get the same 403, so I think the problem is that the provider isn't actually using my local endpoint. I can't work out why.
resource "aws_secretsmanager_secret_version" "foo" {
secret_id = aws_secretsmanager_secret.foo.id
secret_string = "bar"
}
resource "aws_secretsmanager_secret" "foo" {
name = "rabbitmq_battery_emulator"
}
provider "aws" {
region = "eu-west-2"
endpoints {
secretsmanager = "http://local-aws:4566"
}
}
First, check that localstack is configured to run sts. In docker-compose this was just the SERVICES environment variable:
services:
  local-aws:
    image: localstack/localstack
    environment:
      EDGE_PORT: 4566
      SERVICES: secretsmanager, sts
Then make sure that you set the sts endpoint as well as the service you require:
provider "aws" {
region = "eu-west-2"
endpoints {
sts = "http://local-aws:4566"
secretsmanager = "http://local-aws:4566"
}
}
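If the 403 persists even with the endpoints set, it can also help to give the provider dummy credentials and skip its AWS-side validation calls. A sketch, assuming localstack accepts any static keys (which it does by default):
provider "aws" {
  region     = "eu-west-2"
  access_key = "test"  # dummy value; localstack does not validate it
  secret_key = "test"

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    sts            = "http://local-aws:4566"
    secretsmanager = "http://local-aws:4566"
  }
}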
In addition to the SERVICES and sts endpoint config mentioned by @philip-couling, I also had to remove a terraform block from my main.tf:
#terraform {
#  backend "s3" {
#    bucket = "valid-bucket"
#    key    = "terraform/state/account/terraform.tfstate"
#    region = "eu-west-1"
#  }
#
#  required_providers {
#    local = {
#      version = "~> 2.1"
#    }
#  }
#}

AWS Elasticsearch IAM as master user getting AuthorizationException trying to put data

[Screenshot of ES Role Selection console]
I'm trying to put a document to an AWS ES cluster. Code:
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3

host = 'search-dev-operations-2-XXXXXXXX.us-east-2.es.amazonaws.com'  # For example, my-test-domain.us-east-1.es.amazonaws.com
region = 'us-east-2'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)

document = {
    "title": "Moneyball",
    "director": "Bennett Miller",
    "year": "2011"
}

es.index(index="dev-operations-2", doc_type="_doc", id="5", body=document)
print(es.get(index="dev-operations-2", doc_type="_doc", id="5"))
Getting this error message:
elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, '{"Message":"User: arn:aws:iam::XXXXXX:user/andrey.tantsuyev#XXXtechnology.com is not authorized to perform: es:ESHttpPut with an explicit deny"}')
I set up arn:aws:iam::XXXXXX:user/andrey.tantsuyev#XXXtechnology.com as the IAM master user through fine-grained access control. This is my AWS user.
Could anybody help me, please? I have no idea why I'm not authorized.
[Screenshot of ES Cluster details]
This is not a problem in Elasticsearch; the request is being blocked based on the policies associated with your IAM user.
Go to the IAM service console and look up the permissions for the andrey.tantsuyev#XXXtechnology.com user. It appears that there is a "Deny" statement in one of the groups/policies attached to the user that matches the es:ESHttpPut action.
The problem was that andrey.tantsuyev#XXXtechnology.com had MFA restrictions. Once I implemented assumeRole with MFA credentials, everything started working fine.
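For reference, a minimal sketch of that approach with boto3; the role ARN, MFA device ARN, and token code are placeholders:
import boto3
from requests_aws4auth import AWS4Auth

# Assume the role with MFA; all ARNs and the token code are placeholders.
sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::XXXXXX:role/es-master-role',
    RoleSessionName='es-put-session',
    SerialNumber='arn:aws:iam::XXXXXX:mfa/my-user',
    TokenCode='123456'  # current code from the MFA device
)
creds = resp['Credentials']

# Sign ES requests with the temporary role credentials instead of the user's own.
awsauth = AWS4Auth(
    creds['AccessKeyId'],
    creds['SecretAccessKey'],
    'us-east-2',
    'es',
    session_token=creds['SessionToken']
)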