Amplify: Failed to get profile: Profile configuration is missing for: undefined

I'm having some problems with the Amplify CLI.
I don't know if it's related to a recent CLI update...
amplify init
? Do you want to use an existing environment? Yes
? Choose the environment you would like to use: staging
Using default provider awscloudformation
? Select the authentication method you want to use: AWS profile
Failed to get profile: Profile configuration is missing for: undefined
amplify configure
Specify the AWS Region xxx
Specify the username of the new IAM user: xxx
Complete the user creation using the AWS console
Enter the access key of the newly created user:
accessKeyId: ********************
secretAccessKey: ****************************************
This would update/create the AWS Profile in your local machine
? Profile Name: default
Successfully set up the new user.
amplify push
? Select the authentication method you want to use: AWS profile
Failed to get profile: Profile configuration is missing for: undefined
amplify push
? Select the authentication method you want to use: Amplify Admin UI
OK! This time it's working:
UPDATE_IN_PROGRESS ...
UPDATE_FAILED DeploymentBucket
AWS::S3::Bucket Thu API: s3:SetBucketEncryption Access Denied
(as admin)
How do I solve this issue?

OK, I found a solution.
Inside amplify/.config/local-aws-info.json, change
{
  "staging": {
    "configLevel": "amplifyAdmin"
  }
}
to
{
  "staging": {
    "configLevel": "project",
    "useProfile": true,
    "profileName": "default"
  }
}
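If the error still appears after this change, it may also be worth confirming that the profile referenced by profileName actually exists locally. A quick check, assuming the profile is named default as above:
# Show the credentials and region the CLI resolves for the "default" profile
aws configure list --profile default
# Then re-run the push and pick "AWS profile" when prompted
amplify push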

Related

eb create fails with: Unable to assign role. Please verify that you have permission to pass this role: aws-elasticbeanstalk-service-role

I have a Node app that I want to upload to EB.
After running eb init and creating the application, I tried eb create and got this:
WARNING: Insufficient IAM privileges. Unable to determine if instance profile 'aws-elasticbeanstalk-ec2-role' exists, assuming that it exists.
Creating application version archive "app-230218_020058149734".
Uploading testEBUdg/app-230218_020058149734.zip to S3. This may take a while.
Upload Complete.
Environment details for: testEBUdg-dev
Application name: testEBUdg
Region: us-east-1
Deployed Version: app-230218_020058149734
Environment ID: e-spw6eyzmpe
Platform: arn:aws:elasticbeanstalk:us-east-1::platform/Node.js 14 running on 64bit Amazon Linux 2/5.6.4
Tier: WebServer-Standard-1.0
CNAME: testEBUdg-dev.us-east-1.elasticbeanstalk.com
Updated: 2023-02-18 00:01:36.993000+00:00
Printing Status:
2023-02-18 00:01:35 INFO createEnvironment is starting.
2023-02-18 00:01:37 INFO Using elasticbeanstalk-us-east-1-xxxxxxxxxx as Amazon S3 storage bucket for environment data.
2023-02-18 00:01:38 ERROR Unable to assign role. Please verify that you have permission to pass this role: aws-elasticbeanstalk-service-role.
2023-02-18 00:01:38 ERROR Failed to launch environment.
ERROR: ServiceError - Failed to launch environment.
Permissions policies
It looks like the AWS credentials were not configured, so you need to set them up using aws configure; the user needs to have the appropriate permissions, and then you can perform eb init again (a sketch of that flow follows).
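This is only an illustration, assuming an IAM user that already has Elastic Beanstalk permissions (including iam:PassRole on aws-elasticbeanstalk-service-role); the profile name eb-user is a placeholder:
# Store the user's access key and secret key in a named profile
aws configure --profile eb-user
# Re-initialize the EB application with that profile, then retry the create
eb init --profile eb-user
eb create testEBUdg-dev --profile eb-user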

Unable to access AWS account through the Terraform AWS provider

I'm facing an issue; the status code is 401:
"creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1"
the instance code:
resource "aws_instance" "test-EC2" {
  instance_type = "t2.micro"
  ami           = "ami-07ffb2f4d65357b42"
}
I have checked the AMI region, but it's still not working. Any help would be appreciated.
I am looking for a way to create and destroy tokens via the AWS Management Console. I am learning about the Terraform AWS provider, which requires an access key, a secret key, and a token.
As stated in the error message:
creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1".
It is clear that Terraform is not able to authenticate itself through the Terraform AWS provider.
You need a provider block in your Terraform configuration and one of the supported ways to authenticate:
provider "aws" {
  region = var.aws_region
}
In general, the following are the ways to authenticate to AWS via the Terraform AWS provider:
Parameters in the provider configuration
Environment variables
Shared credentials files
Shared configuration files
Container credentials
Instance profile credentials and region
For more details, please take a look at: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
By default, if you are already programmatically signed in to your AWS account, the Terraform AWS provider will use those credentials.
For example:
If you are using aws_access_key_id and aws_secret_access_key to authenticate yourself, then you might have a profile for these credentials. You can check this in your $HOME/.aws/credentials config file.
Export the profile using the command below and you are good to go:
export AWS_PROFILE="name_of_profile_using_secrets"
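Alternatively, the environment-variable route from the list above looks roughly like this; the key values and region are placeholders:
# Static credentials picked up by the provider without any profile
export AWS_ACCESS_KEY_ID="<your_access_key_id>"
export AWS_SECRET_ACCESS_KEY="<your_secret_access_key>"
export AWS_REGION="us-east-1"   # or AWS_DEFAULT_REGION; use your own region
terraform init
terraform plan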
If you use an SSO user for authentication, then you might have an SSO profile available in $HOME/.aws/config. In that case you need to sign in with the respective AWS SSO profile using the command below:
aws sso login --profile <sso_profile_name>
If you don't have an SSO profile yet, you can configure one using the commands below and then export it:
aws configure sso
[....] # configure your SSO
export AWS_PROFILE=<your_sso_profile>
Do you have an aws provider block defined in your Terraform configuration?
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}
If you are running this locally, set up an IAM user profile (use aws configure) and export that profile in your current session:
aws configure --profile xxx
export AWS_PROFILE=xxx
once you have the profile set, this should work.
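A quick sanity check before running Terraform (xxx is the placeholder profile name from above):
# Should print the account and user ARN behind the exported profile
aws sts get-caller-identity --profile xxx
terraform init
terraform plan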
If you are running this deployment in a pipeline like GitHub Actions, you could also make use of OpenID Connect to avoid any access key and secret key.
Please find the detailed setup for OpenID Connect here.

Error: checking AWS STS access – cannot get role ARN for current session: MissingEndpoint: 'Endpoint' configuration is required for this service

I created a cluster.yaml file which contains the below information:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-litmus-demo
  region: ${AWS_REGION}
  version: "1.21"
managedNodeGroups:
  - instanceType: m5.large
    amiFamily: AmazonLinux2
    name: eks-litmus-demo-ng
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
When I run $ eksctl create cluster -f cluster.yaml to create the cluster through my terminal, I get the below error:
Error: checking AWS STS access – cannot get role ARN for current session: MissingEndpoint: 'Endpoint' configuration is required for this service
How can I resolve this? Please help!!!
Note: I have the global and regional endpoints under STS set to "valid in all AWS regions".
In my case, it was a typo in the region. I had us-east1 as the value. When it was corrected to us-east-1, the error disappeared. So it is worth checking for typos in any of the fields.
Mention --profile if you use an AWS profile other than the default:
eksctl create cluster -f cluster.yaml --profile <profile-name>
My SSO session token had expired:
aws sts get-caller-identity --profile default
The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
Then I needed to refresh my SSO session token:
aws sso login
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.us-east-2.amazonaws.com/
Then enter the code:
XXXX-XXXX
Successfully logged into Start URL: https://XXXX.awsapps.com/start
Error: checking AWS STS access – cannot get role ARN for current session:
According to this, I think it's not able to get the role (in your case, the cluster creator's role) that is responsible for creating the cluster.
Create an IAM user with an appropriate role. Attach the necessary policies to that role to create the EKS cluster.
Then you can use the aws configure command to add the AWS Access Key ID, AWS Secret Access Key, and default region name.
[Make sure that the user has the appropriate access to create and access the EKS cluster in your AWS account. You can use the AWS CLI to verify that you have the appropriate access, as sketched below.]
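For instance, a rough check (the region is a placeholder):
# Confirm which IAM identity the CLI is using
aws sts get-caller-identity
# Confirm that identity can at least list EKS clusters in the target region
aws eks list-clusters --region us-east-1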
It is important to configure the default profile for the AWS CLI correctly on the command line, using:
set AWS_ACCESS_KEY_ID=<your_access_key>
set AWS_SECRET_ACCESS_KEY=<your_secret_key>

Invalid Terraform AWS provider credentials when passing AWS system manager parameter store variables

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo to apply IaC using Terraform. To access the credentials needed for the Terraform AWS provider, I used AWS system manager parameter store to retrieve the access and secret key within the buildspec.yml.
Problem:
The Systems Manager Parameter Store masks the access and secret key values, so when they are inherited by the Terraform AWS provider, the provider reports that the credentials are invalid:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: xxxx
To reproduce the problem:
Create system manager parameter store variables (TF_VAR_AWS_ACCESS_KEY_ID=access, TF_AWS_SECRET_ACCESS_KEY=secret)
Create AWS CodeBuild project with:
"source": {
"type": "NO_SOURCE",
}
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL"
}
buildspec.yml with the following (modified to create .tf files instead of sourcing them from GitHub):
version: 0.2
env:
  shell: bash
  parameter-store:
    TF_VAR_AWS_ACCESS_KEY_ID: TF_AWS_ACCESS_KEY_ID
    TF_VAR_AWS_SECRET_ACCESS_KEY: TF_AWS_SECRET_ACCESS_KEY
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "provider "aws" {\n\taccess_key = var.AWS_ACCESS_KEY_ID\n\tsecret_key = var.AWS_SECRET_ACCESS_KEY\n\tversion = \"~> 3.2.0\"\n}" >> provider.tf
      - printf "variable "AWS_ACCESS_KEY_ID" {}\nvariable "AWS_SECRET_ACCESS_KEY" {}" > vars.tf
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
Attempts:
Passing creds through the terraform -var option:
terraform plan -var="AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID" -var="AWS_ACCESS_KEY_ID=$TF_VAR_AWS_SECRET_ACCESS_KEY"
but I get the same invalid credentials error
Export system manager parameter store credentials within buildspec.yml:
commands:
- export AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID
- export AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY
which results in duplicate masked variables and the same error above. printenv output within buildspec.yml:
AWS_ACCESS_KEY_ID=***
TF_VAR_AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
TF_VAR_AWS_SECRET_ACCESS_KEY=***
Possible solution routes:
Somehow pass the MASKED parameter store credential values into Terraform successfully (preferred)
Pass sensitive credentials into the Terraform AWS provider using a different method, e.g. AWS Secrets Manager, an IAM role, etc.
Unmask the parameter store variables to pass into the aws provider (probably defeats the purpose of using aws system manager in the first place)
I experienced this same issue when working with Terraform on Ubuntu 20.04.
I had configured the AWS CLI using the aws configure command with an IAM credential for the terraform user I created on AWS.
However, when I run the command:
terraform plan
I get the error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 17268b96-6451-4527-8b17-0312f49eec51
Here's how I fixed it:
The issue was caused by a misconfiguration of my AWS CLI via the aws configure command: I had entered the AWS Access Key ID where I was meant to enter the AWS Secret Access Key, and the AWS Secret Access Key where I was meant to enter the AWS Access Key ID.
I had to run the command below to re-configure the AWS CLI correctly with an IAM credential for the terraform user I created on AWS:
aws configure
You can confirm that it is fine by running a simple AWS CLI command:
aws s3 ls
If you get an error like the one below, then you know you're still not set up correctly:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
That's all.
I hope this helps
Pass sensitive credentials into the Terraform AWS provider using a different method, e.g. AWS Secrets Manager, an IAM role, etc.
Generally, you shouldn't need to hard-code AWS credentials for Terraform to work. Instead, the CodeBuild IAM role should be enough for Terraform, as explained in the Terraform docs.
With this in mind, I verified that the following works and creates the requested bucket using Terraform from a CodeBuild project. The default CodeBuild role was modified with S3 permissions to allow creation of the bucket (see the sketch after the buildspec).
version: 0.2
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test-43242-efdfdfd-4444334\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
      - terraform apply -auto-approve
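For reference, granting the CodeBuild service role S3 access can be done roughly like this; the role name below is a placeholder, and the broad managed policy is only one way to grant the needed S3 permissions:
# Attach an S3 policy to the CodeBuild service role so terraform apply can create the bucket
aws iam attach-role-policy \
  --role-name <codebuild-service-role-name> \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess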
Well, my case was quite foolish, but it might help:
After downloading the .csv file, I copy-pasted the keys with aws configure.
In the middle of the secret key there was a "+". In my editor I double-click to copy, but the selection stops at a non-alphanumeric character, so only the first part of the secret access key was copied.
Make sure that you have copied the full secret key.
I had a 403 error.
The issue is that you should remove the {} from the example code.
provider "aws" {
  access_key = "{YOUR ACCESS KEY}"
  secret_key = "{YOUR SECRET KEY}"
  region     = "eu-west-1"
}
It should look like this:
provider "aws" {
  access_key = "YOUR ACCESS KEY"
  secret_key = "YOUR SECRET KEY"
  region     = "eu-west-1"
}
I have faced this issue multiple times. The solution is to create a user in AWS from the IAM Management Console (a CLI sketch is below), and the error will be fixed.
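The same can be done from the CLI; a sketch, where terraform-user and the attached managed policy are just examples:
# Create an IAM user, give it a policy, and generate an access key for it
aws iam create-user --user-name terraform-user
aws iam attach-user-policy \
  --user-name terraform-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam create-access-key --user-name terraform-user
# Feed the returned key pair to the CLI / provider
aws configure --profile terraform-user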

"kubectl" not connecting to aws EKS cluster from my local windows workstation

I am trying to set up an AWS EKS cluster and want to connect to that cluster from my local Windows workstation, but I am not able to connect. Here are the steps I did:
Create an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select the AWS service role "EKS" -> give it the role name "eks-role-1").
Create another IAM user named "eks" for programmatic access. This will help me connect to my EKS cluster from my local Windows workstation. The policies I attached to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy", and "AmazonEKS_CNI_Policy".
Next, the EKS cluster was created with the role ARN from step 1. Finally, the EKS cluster was created in the AWS console.
On my local Windows workstation, I downloaded "kubectl.exe" and "aws-iam-authenticator.exe" and ran aws configure using the access key and secret from step 2 for the user "eks". After configuring "~/.kube/config", I ran the command below and got this error:
Command: kubectl.exe get svc
Output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
(the two lines above repeat several times)
Unable to connect to the server: getting credentials: exec: exit status 1
Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and secret for the assigned AWS role (eks-role-1 from step 1)?
For people who run into this: maybe you provisioned EKS with a profile.
EKS does not add the profile inside the kubeconfig.
Solution:
Export your AWS credentials:
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
If you've already configured AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
Similar to 2, but you only need to do it once. Edit your kubeconfig:
users:
- name: eks # This depends on your config.
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "general"
      env:
        - name: AWS_PROFILE
          value: "<YOUR_PROFILE_HERE>"
I think I got the answer for this issue; I want to write it down here so people can benefit from it.
When you create an EKS cluster for the first time, check which user (or role) you are creating it with (check your AWS web console user settings). Even if you are creating it from a CloudFormation script, you can also assign a different role to create the cluster. You have to get CLI access for that user to start accessing your cluster from the kubectl tool. Once you have first-time access (that user will have admin access by default), you may need to add another IAM user to cluster admin (or another role) using the aws-auth ConfigMap; only then can you switch to or use an alternative IAM user to access the cluster from the kubectl command line (a sketch using eksctl is below).
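If you use eksctl, mapping that additional IAM user into the cluster can look roughly like this; cluster name, region, and account ID are placeholders, and "eks" is the IAM user from the question:
# Add an extra IAM user to the aws-auth ConfigMap with cluster-admin rights
eksctl create iamidentitymapping \
  --cluster <cluster-name> \
  --region <region> \
  --arn arn:aws:iam::111122223333:user/eks \
  --username eks \
  --group system:masters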
Make sure the file ~/.aws/credentials has an AWS access key and secret key for an IAM account that can manage the cluster.
Alternatively, you can set the AWS environment variables:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
Adding another option.
Instead of working with aws-iam-authenticator, you can change the command to aws and replace the args as below:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args: #<--- Change the args
        - --region
        - <YOUR_REGION>
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      command: aws #<--- Change the command to aws
      env:
        - name: AWS_PROFILE
          value: <YOUR_PROFILE_HERE>
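Rather than editing the kubeconfig by hand, an equivalent exec entry can usually be generated with the AWS CLI; cluster name, region, and profile are placeholders:
# Writes/updates ~/.kube/config with an exec entry that runs "aws eks get-token"
aws eks update-kubeconfig \
  --name my-cluster \
  --region <YOUR_REGION> \
  --profile <YOUR_PROFILE_HERE>
# Verify the connection
kubectl get svc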