When I access S3 from the CLI, it lists the bucket contents:
aws s3 ls druid-s3-bucket
PRE pccpdevint/
PRE test-druid/
2022-03-15 21:41:36 4 tes
But from the Java SDK, it fails with this error:
Unable to load credentials into profile [druidbotint]: AWS Access Key ID is not specified., com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#2ff8d560: Failed to connect to service endpoint: , com.amazonaws.auth.InstanceProfileCredentialsProvider#3cad42b9: Failed to connect to service endpoint: ]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1266) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:842) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:792) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753) ~[aws-java-sdk-core-1.12.37.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713) ~[aws-java-sdk-core-1.12.37.jar:?]
I have a custom role, and I use an external process to look up credentials.
Here is the ~/.aws/credentials file:
[default]
source_profile = druidbotint
role_arn = arn:aws:iam::1233:role/worker-role
role_session_name = druidbotsession
[druidbotint]
credential_process = awsconnect -u druid_s -a 1233 -r custom_role -p conf
Here is the ~/.aws/config file:
[profile conf]
region = us-west-2
Any idea what could be going wrong?
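As a quick sanity check (just a sketch, assuming the files above live in the default ~/.aws location and the region from the config file), the same profile chain can be resolved with boto3, which also honors credential_process and source_profile, to confirm the chain itself works outside the Java SDK:

import boto3

# Resolve the [default] profile, which assumes the role via [druidbotint]'s credential_process
session = boto3.Session(profile_name='default')
print(session.client('sts', region_name='us-west-2').get_caller_identity()['Arn'])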
Related
I am running my Java application on an EC2 instance, and I need to access cross-account resources, for which we have listed different profiles with a role ARN and credential source in the config file.
[profile abc]
role_arn = arn:aws:iam::12345678:role/abc-role
credential_source = Ec2InstanceMetadata
[profile xyz]
role_arn = arn:aws:iam::12345678:role/xyz-role
credential_source = Ec2InstanceMetadata
Java code using the profile:
File configFile = new File(System.getProperty("user.home"), ".aws/config");
System.out.println(configFile.getAbsolutePath());
AWSCredentialsProvider credentialsProvider = new ProfileCredentialsProvider(configFile.getAbsolutePath(), "profile abc");
AWSBatch client = AWSBatchClientBuilder.standard().withCredentials(credentialsProvider).withRegion("us-east-1").build();
Error
com.amazonaws.SdkClientException: Unable to load credentials into profile [default]: AWS Access Key ID is not specified.] with root cause
com.amazonaws.SdkClientException: Unable to load credentials into profile [default]: AWS Access Key ID is not specified.
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.fromStaticCredentials(ProfileStaticCredentialsProvider.java:55) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileStaticCredentialsProvider.&lt;init&gt;(ProfileStaticCredentialsProvider.java:40) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileAssumeRoleCredentialsProvider.fromAssumeRole(ProfileAssumeRoleCredentialsProvider.java:72) ~[aws-java-sdk-core-1.11.470.jar:na]
at com.amazonaws.auth.profile.internal.ProfileAssumeRoleCredentialsProvider.&lt;init&gt;(ProfileAssumeRoleCredentialsProvider.java:46) ~[aws-java-sdk-core-1.11.470.jar:na]
When executing this command, I get this error:
C:\WINDOWS\system32>eksctl create cluster --name eksctl-demo --profile myAdmin2
Error: checking AWS STS access – cannot get role ARN for current session: operation error STS: GetCallerIdentity, failed to sign request: failed to retrieve credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: i/o timeout
The myAdmin2 IAM user's credentials are set up as follows:
Credentials file:
[myAdmin2]
aws_access_key_id = ******************
aws_secret_access_key = ********************
config file:
[profile myAdmin2]
region = us-east-2
output = json
myAdmin2 has access, as this CLI call shows:
C:\WINDOWS\system32>aws iam list-users --profile myAdmin2
{
"Users": [
{
"Path": "/",
"UserName": "myAdmin",
"UserId": "AIDAYYPFV776ELVEJ5ZVQ",
"Arn": "arn:aws:iam::602313981948:user/myAdmin",
"CreateDate": "2022-09-30T19:08:08+00:00"
},
{
"Path": "/",
"UserName": "myAdmin2",
"UserId": "AIDAYYPFV776LEDK2PCCI",
"Arn": "arn:aws:iam::602313981948:user/myAdmin2",
"CreateDate": "2022-09-30T21:39:33+00:00"
}
]
}
I had problems working with myAdmin, which is why I created a new IAM user called myAdmin2.
myAdmin2 is granted the AdministratorAccess permission (shown in a screenshot in the original post).
aws cli version installed:
C:\WINDOWS\system32>aws --version
aws-cli/2.7.35 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
My environment variables:
C:\WINDOWS\system32>set
AWS_ACCESS_KEY_ID= ***********the same as I have in credentials file
AWS_CONFIG_FILE=~/.aws/config
AWS_DEFAULT_PROFILE=myAdmin2
AWS_DEFAULT_REGION=us-east-2
AWS_PROFILE=myAdmin2
AWS_SECRET_ACCESS_KEY=****************the same as I have in credentials file
AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
I think those are all the necessary details. If someone can help, please do; I can't move past this error!
It finally worked! Everything was configured correctly; I just had to reboot my laptop and that resolved the issue.
I have created two Elasticsearch domains, one in us-east-1 and another in us-west-2. I have registered a manual snapshot repository in the us-east-1 domain and taken a snapshot, and the data is in an S3 bucket in us-east-1.
How should I go about doing the restoration?
Main questions:
Do I have to do cross-region replication of the S3 bucket to us-west-2, so that every time a snapshot is taken in us-east-1, it automatically reflects in the us-west-2 bucket?
If so, do I have to be in us-west-2 to register the manual snapshot repository on that domain and that S3 bucket?
Will the restore API look like this?
curl -XPOST 'elasticsearch-domain-endpoint-us-west-2/_snapshot/repository-name/snapshot-name/_restore'
You don't need to create S3 buckets in several regions; only one is sufficient, so your snapshot repository can keep pointing at the existing bucket in us-east-1.
You need to register the snapshot repository in both of your clusters so that you can access it from both sides. From one cluster you will create snapshots, and from the second cluster you'll be able to restore those snapshots.
Yes, that's correct.
1. No; as Val said, you don't need to create S3 buckets in several regions ("all buckets work globally", see AWS S3 Bucket with Multiple Regions).
2. Yes, you do. You need to register the snapshot repository on both of your clusters: one registration on the us-east-1 domain to create snapshots into the S3 bucket, and another on the us-west-2 domain so that your destination cluster can read them.
3. Yes, it is.
Additionally, you need to sign your calls to AWS ES in order to register the repository and take the snapshot. The best option for me was to use the Python scripts described below. Signing is not necessary for the restore.
Follow these instructions:
https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb and
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html
Create a repository
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/'  # include https:// and a trailing slash; your Elasticsearch endpoint (if you use a VPC, you can create a tunnel)
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "yourreponame_bucket",
        "region": "us-east-1",
        "role_arn": "arn:aws:iam::1111111111111:role/AmazonESSnapshotRole"  # don't forget to create the AmazonESSnapshotRole
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Create a snapshot
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://localhost:9999/' # include https:// and trailing /
region = 'us-east-1' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
path = '_snapshot/yourreponame/yoursnapshot_name' # the Elasticsearch API endpoint
url = host + path
payload = {
    "indices": "*",
    "include_global_state": "false",
    "ignore_unavailable": "false"
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers, verify=False)
print(r.status_code)
print(r.text)
Restore
This must be called without signing:
curl -XPOST -k "https://localhost:9999/_snapshot/yourreponame/yoursnapshot_name/_restore" \
-H "Content-type: application/json" \
-d $'{
"indices": "*",
"ignore_unavailable": false,
"include_global_state": false,
"include_aliases": false
}'
It is highly recommended that the clusters have the same version.
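As a small check before restoring (a sketch using the same placeholder endpoint and signing setup as the scripts above), you can compare the version each domain reports on its root endpoint:

import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://localhost:9999/'  # include https:// and trailing /, same placeholder endpoint as above
region = 'us-east-1'
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# The root endpoint reports the Elasticsearch version; run this against both domains and compare the numbers
r = requests.get(host, auth=awsauth, verify=False)
print(r.json()['version']['number'])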
I am building some resources in one AWS account with Terraform, but I am storing the Terraform state remotely in another AWS account.
Example of terraform code:
provider "aws" {
region = "us-east-1"
profile = "AWS account 1"
}
terraform {
backend "s3" {
bucket = "bucket name"
region = "us-east-1"
key = "terraform.tfstate"
dynamodb_table = "locks"
profile = "AWS account 2"
}
}
with this .aws/credentials file:
[AWS account 1]
aws_access_key_id=33333333333333
aws_secret_access_key=44444444444444444444
[AWS account 2]
aws_access_key_id=111111111111
aws_secret_access_key=22222222222222222222
When I am NOT using SSO to log in to AWS and just use access and secret keys in .aws/credentials, everything works fine.
However, when I switch to logging into AWS with SSO and assuming roles, things get ugly with .aws/config (with SSO, .aws/config should be used):
[okta]
aws_saml_url = [some aws_saml_url]
[profile AWS account 1-umbrella]
role_arn = arn:aws:iam::[some accountID]:role/[some role_name]
region = us-west-1
[profile AWS account 1]
role_arn = arn:aws:iam::[some accountID]:role/[some role_name]
source_profile = AWS account 1-umbrella
region = us-west-1
[profile AWS account 2-umbrella]
role_arn = arn:aws:iam::[some accountID]:role/[some role_name]
region = us-west-1
[profile AWS account 2]
role_arn = arn:aws:iam::[some accountID]:role/[some role_name]
source_profile = AWS account 1-umbrella
region = us-west-1
I am getting the error "Error refreshing state: AccessDenied: Access Denied status code: 403",
or "Error: error configuring S3 Backend: Error creating AWS session: SharedConfigAssumeRoleError: failed to load assume role for arn:aws:iam::[some account id]:role/[some role name], source profile AWS account 1-umbrella has no shared credentials".
Does anyone have any ideas how I can fix this?
After reading the question How to SSH and run commands in EC2 using boto3?, I tried to use SSM to run commands on an EC2 instance automatically. However, when I write code like this:
import boto3

def excute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send command to EC2 instances.
ssm_client = boto3.client('ssm')
commands = ['echo "hello world"']
instance_id = running_instance[0:1]  # list of running instance IDs obtained elsewhere
excute_command_on_instance(ssm_client, commands, instance_id)
It gives me this error:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:iam::62771xxxx946:user/Python_CloudComputing is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:eu-west-2:6277xxxx3946:instance/i-074f862c3xxxxfc07
After I used STS to generate credentials for the client, I got the code below.
import boto3

def excute_command_on_instance(client, command, instance_id):
    response = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': command},
        InstanceIds=instance_id,
    )
    return response

# Using SSM in boto3 to send command to EC2 instances.
sts = boto3.client('sts')
sts_response = sts.get_session_token()
ACCESS_KEY = sts_response['Credentials']['AccessKeyId']
SECRET_KEY = sts_response['Credentials']['SecretAccessKey']
ssm_client = boto3.client(
    'ssm',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
commands = ['echo "hello world"']
instance_id = running_instance[0:1]  # list of running instance IDs obtained elsewhere
excute_command_on_instance(ssm_client, commands, instance_id)
However, this time it gives me this error:
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the SendCommand operation: The security token included in the request is invalid.
Can anybody tell me how to solve this problem?
You are missing permissions for the IAM user or the role to access SSM.
You are also trying to use STS to get access, which overcomplicates what you need to do; whatever user or role STS issues credentials for needs the same permissions anyway. There are many good cases for using STS (the rule of least privilege), but I don't think you need STS here.
Amazon provides predefined managed policies for SSM that you can quickly attach to a user or role, such as:
AmazonEC2RoleForSSM
AmazonSSMFullAccess
AmazonSSMReadOnlyAccess
This link will help you configure access to Systems Manager:
Configuring Access to Systems Manager
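For example, here is a minimal sketch of attaching one of those managed policies to the IAM user shown in the error message (the user name is taken from the question; in practice, pick the narrowest policy that covers ssm:SendCommand):

import boto3

iam = boto3.client('iam')
# Attach the AWS-managed SSM policy to the user that calls send_command
iam.attach_user_policy(
    UserName='Python_CloudComputing',  # user name from the error in the question
    PolicyArn='arn:aws:iam::aws:policy/AmazonSSMFullAccess',
)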