I have a Kubernetes pod that accesses the AWS KMS service through the AWS Java SDK to decrypt a password. A valid IAM role is attached to the pod, but the request fails with the error below:
{
"message": "Service Unavailable: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)),
SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider#219aa2a6: profile file cannot be null,
com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#399fc8ea: Internal Server Error (Service: null; Status Code: 500; Error Code: null; Request ID: null)]"
}
Below is the code that is supposed to invoke KMS:
private final boolean kmsEnabled;
private final AWSKMS kmsClient;

public KmsKeyManager(@Value("${kms.enabled}") final boolean kmsEnabled,
                     @Value("${kms.endpoint}") final String kmsEndpoint,
                     @Value("${aws.region}") final String awsRegion) {
    AwsClientBuilder.EndpointConfiguration endpointConfig =
            new AwsClientBuilder.EndpointConfiguration(kmsEndpoint, awsRegion);
    kmsClient = AWSKMSClientBuilder.standard()
            .withCredentials(new DefaultAWSCredentialsProviderChain())
            .withEndpointConfiguration(endpointConfig)
            .build();
    this.kmsEnabled = kmsEnabled;
}
You may have to create a VPC endpoint, since EKS pods may not have an external IP and therefore cannot reach KMS.
The error message contains the hint about what is missing:
Unable to load AWS credentials from any provider in the chain …
See the documentation on how to provide the necessary credentials.
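On EKS, credentials from a pod's IAM role are usually wired up via IRSA (IAM Roles for Service Accounts): the pod's service account is annotated with the role ARN, and the SDK's default credentials chain picks up the injected web-identity token. A minimal sketch of such a service account, with hypothetical names and a placeholder role ARN:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kms-decryptor        # hypothetical name
  namespace: default
  annotations:
    # The role must trust the cluster's OIDC provider; ARN is a placeholder
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-kms-role
```

Note that, for the SDK for Java v1, the web-identity provider in the default chain needs the STS module (aws-java-sdk-sts) on the classpath, and older SDK versions predate IRSA support, so check your SDK version as well.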
Related
I'm facing an issue; the status code is 401:
"creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1"
The instance code:
resource "aws_instance" "test-EC2" {
  instance_type = "t2.micro"
  ami           = "ami-07ffb2f4d65357b42"
}
I have checked the AMI region, but it is still not working.
Any help would be appreciated.
I am looking for a way to create and destroy tokens via the AWS Management Console. I am learning about the Terraform AWS provider, which requires an access key, a secret key, and a token.
As stated in the error message:
creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1".
It is clear that Terraform is not able to authenticate itself through the Terraform AWS provider.
You need a provider block in your Terraform configuration, plus one of the supported ways to authenticate:
provider "aws" {
  region = var.aws_region
}
In general, the following are the ways to authenticate to AWS via the Terraform AWS provider:
Parameters in the provider configuration
Environment variables
Shared credentials files
Shared configuration files
Container credentials
Instance profile credentials and region
For more details, please take a look at: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
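For example, the first option (parameters in the provider configuration) looks roughly like the sketch below; the key values are placeholders, and hard-coding secrets like this is discouraged outside of quick local tests:

```hcl
provider "aws" {
  region     = "eu-west-2"
  access_key = "AKIA..." # placeholder – prefer environment variables or a shared profile
  secret_key = "..."     # placeholder
}
```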
By default, if you are already programmatically signed in to your AWS account, the Terraform AWS provider will use those credentials.
For example:
If you are using aws_access_key_id and aws_secret_access_key to authenticate, you might have a profile for these credentials. You can check this in your $HOME/.aws/credentials config file.
Export the profile using the command below and you are good to go:
export AWS_PROFILE="name_of_profile_using_secrets"
If you have an SSO user for authentication, you might have an SSO profile available in $HOME/.aws/config. In that case, sign in with the respective AWS SSO profile using the command below:
aws sso login --profile <sso_profile_name>
If you don't have an SSO profile yet, you can configure one using the commands below and then export it:
aws configure sso
[....] # configure your SSO
export AWS_PROFILE=<your_sso_profile>
Do you have an AWS provider defined in your Terraform configuration?
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}
If you are running this locally, set up an IAM user profile (using aws configure) and export that profile in your current session:
aws configure --profile xxx
export AWS_PROFILE=xxx
Once the profile is set, this should work.
If you are running this deployment in a pipeline such as GitHub Actions, you can also make use of OpenID Connect to avoid any access key and secret key.
Please find the detailed setup for OpenID Connect here.
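As a sketch, an OIDC-based GitHub Actions job might look like the following, assuming a role (the ARN here is hypothetical) whose trust policy allows the GitHub OIDC provider:

```yaml
permissions:
  id-token: write # required for OIDC token exchange
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::<account-id>:role/gha-deploy-role # hypothetical role
      aws-region: eu-west-2
  - run: terraform apply -auto-approve
```

The configure-aws-credentials action exchanges the workflow's OIDC token for temporary credentials, so no long-lived keys are stored in the repository.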
I have a service that is deployed using Kubernetes and Docker.
To call an Amazon service (SP-API), we have created a role (SellerRole) as per this document: https://developer-docs.amazon.com/sp-api/docs/creating-and-configuring-iam-policies-and-entities
We have one user who has this role assigned. Using that user, if we assume the above role (SellerRole), we get temporary credentials and everything works fine.
Since the service is deployed on Kubernetes, I am trying to use IRSA and a role to do the same thing.
I have set up IRSA and given the Kubernetes cluster a role (PODRole). The PODRole has access to assume SellerRole. Also, the pods have a token file, which confirms that they are configured correctly.
Now the issue is: when I exec into the pod and run
aws sts assume-role --role-arn arn:aws:iam::<account-id>:role/SellerRole --role-session-name piyush-session
it works correctly and gives back the temp credentials.
However, when I try to do the same from code, it gives an error. Below is the code.
StsClient stsClient = StsClient.builder()
        .region(region)
        .credentialsProvider(WebIdentityTokenFileCredentialsProvider.create())
        .build();

AssumeRoleRequest roleRequest = AssumeRoleRequest.builder()
        .roleArn("SellerRole")
        .roleSessionName("SessionName")
        .build();

AssumeRoleResponse roleResponse = stsClient.assumeRole(roleRequest);
Credentials credentials = roleResponse.credentials();
Below is the error.
Unable to assume role. Exception: software.amazon.awssdk.services.sts.model.StsException: User: arn:aws:sts::id:assumed-role/eks-qa01-PODRole/aws-sdk-java-1661372423393 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::12345678999:role/eks-qa01-PODRole (Service: Sts, Status Code: 403, Request ID: b6a8f294-52d8-450f-9698)
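The 403 suggests that PODRole lacks permission to call sts:AssumeRole on SellerRole (and/or that SellerRole's trust policy does not trust PODRole). As a sketch, PODRole would need an identity policy along these lines, with the account ID and ARN adjusted to your setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::12345678999:role/SellerRole"
    }
  ]
}
```

Correspondingly, SellerRole's trust policy must list PODRole (or the account) as a principal allowed to assume it. Also note that the roleArn passed to AssumeRoleRequest must be the full role ARN, not just the role name.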
I'm trying to use Terraform to connect to AWS and create infrastructure.
If I run aws configure sso, I can log in (defaulting to eu-west-2) and move around the estate.
I then run terraform apply, with the AWS part as follows:
provider "aws" {
  region                  = "eu-west-2"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "450694575897_ProdPS-SuperUsers"
}
Terraform reports: Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: 5b8be53d-253d-4c48-8568-ad78be14115f
The following vars are set:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
If I run
aws sts get-session-token --region=us-west-2
I get
An error occurred (InvalidClientTokenId) when calling the GetSessionToken operation: The security token included in the request is invalid.
I was having the same problem when I tried to deploy through Terraform Cloud.
You might be using an old key that is either deleted or inactive. To be sure:
1. Go to the security credentials on your account page: click your name in the top right corner -> My security credentials.
2. Check whether the key you set in your credentials still exists or has been deleted. If it has been deleted, create a new key and use it.
3. If your key is still there, check that it is active.
I solved the issue by doing the following:
$ aws configure
enter the access key:
enter the secret key:
select default region:
select default format [none/json]:
In your main.tf file, add the profile as shown below:
provider "aws" {
  region  = "eu-west-2"
  profile = "xxxuuzzz"
}
This is my Terraform setup. When I used an access key and a secret key in a different account, I had no problems initializing Terraform. But now that I'm using SSO with this account, I get this error:
Error loading state:
AccessDenied: Access Denied
status code: 403, request id: xxx, host id: xxxx
Then I found this in a Terraform document. I'm not sure I understand it correctly, but am I getting this error because I am using SSO? If so, what do I need to do to fix this and get Terraform to work with this account?
"Please note that the AWS Go SDK, the underlying authentication handler used by the Terraform AWS Provider, does not support all AWS CLI features, such as Single Sign On (SSO) configuration or credentials."
Note: "my-bucket" was previously created in this account using the CLI.
provider "aws" {
  region  = "us-east-1"
  profile = "XXXXX"
}
terraform {
  required_version = "~> 0.13.0"

  backend "s3" {
    bucket = "mybucket"
    key    = "mykey"
    region = "us-east-1"
  }
}
I am having this exact same issue with Terraform and SSO; I will update if I find a solution.
Update: in my case it was because the state bucket had an explicit deny for unencrypted transfers. I added encrypt = true to my tfstate backend and it worked fine. https://www.terraform.io/docs/language/settings/backends/s3.html#s3-state-storage
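For reference, the encrypted-backend configuration described above would look roughly like this (the bucket and key values are placeholders matching the earlier example):

```hcl
terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "mykey"
    region  = "us-east-1"
    encrypt = true # enable server-side encryption of the state object
  }
}
```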
DynamoDBLocal is rejecting my credentials in spite of the documentation indicating that valid credentials are unnecessary:
The AWS SDKs for DynamoDB require that your application configuration specify an access key value and an AWS region value...these do not have to be valid AWS values in order to run locally.
In this case, I've set up my credentials in ~/.aws/credentials as:
[default]
aws_access_key_id = BogusAwsAccessKeyId
aws_secret_access_key = BogusAwsSecretAccessKey
I run DynamoDBLocal using:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar
check that it's working by hitting http://localhost:8000/shell/,
and then run my test Java app:
DefaultAWSCredentialsProviderChain credentialProvider = new DefaultAWSCredentialsProviderChain();
AWSCredentials awsCredentials = credentialProvider.getCredentials();
log.info("creds \"{}\", \"{}\"", awsCredentials.getAWSAccessKeyId(), awsCredentials.getAWSSecretKey());

AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentialProvider);
client.withEndpoint("http://localhost:8000");
client.withRegion(Regions.US_WEST_2);
dynamoDB = new DynamoDB(client);

try {
    TableCollection<ListTablesResult> tables = dynamoDB.listTables();
    while (tables.iterator().hasNext()) { // <-- exception thrown here
        log.info(tables.iterator().next().getTableName());
    }
} catch (Exception e) {
    log.error("", e);
}
which results in this output:
creds "BogusAwsAccessKeyId", "BogusAwsSecretAccessKey"
com.amazonaws.AmazonServiceException: The security token included in the request is invalid. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: UnrecognizedClientException
Any thoughts on why it is concerned with the validity of the credentials?
In your code, you are calling withRegion() after calling withEndpoint(). The call to withRegion() overrides the endpoint, pointing the client at DynamoDB's us-west-2 endpoint, and that's why your authentication is failing (the request is actually going to the real DynamoDB us-west-2 region). Remove the withRegion() line.
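A way to avoid this conflict entirely is to set the endpoint and signing region together through the client builder (AWS SDK for Java v1), so neither call can override the other. A sketch along those lines:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;

// Endpoint and signing region are supplied as one configuration object,
// so the client always talks to the local endpoint.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
        .build();
DynamoDB dynamoDB = new DynamoDB(client);
```

With this setup the bogus credentials are still sent, but DynamoDBLocal accepts them, since it does not validate credential values.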