I am trying to upload a custom public key to my Amazon AWS account, because I would like to use my custom-generated keypair for communication with AWS. I am performing this upload using Ansible's ec2_key module.
Here is what I have done so far:
STEP 1. Sign up for Amazon AWS account here. I entered an "AWS Account Name" and password.
STEP 2. Install Python packages AWS CLI and boto:
$ pip install awscli boto
STEP 3. Generate SSH keypair (I used Ansible for this as well):
- name: Generate a 2048-bit SSH key for user
  user:
    name: "{{ ansible_ssh_user }}"
    generate_ssh_key: yes
    ssh_key_bits: 2048
    ssh_key_file: ~/.ssh/id_rsa
STEP 4. I copied the contents of the public key (~/.ssh/id_rsa.pub) into /home/username/.aws/credentials.
STEP 5. Use an Ansible task to upload the public key to Amazon AWS:
vars:
  aws_access_key_id: my_key_name
  aws_region: "us-west-2"
  aws_secret_access_key: "ssh-rsa Y...r"
tasks:
  - name: example3 ec2 key
    ec2_key:
      name: "{{ aws_access_key_id }}"
      region: "{{ aws_region }}"
      key_material: "{{ aws_secret_access_key }}"
      state: present
      force: True
      validate_certs: yes
The output of STEP 5 is:
An exception occurred during task execution. ..."module_stderr": "Traceback (most
recent call last):\n File \"/tmp/ansible_WqbqHU/ansible_module_ec2_key.py\",...
raise self.ResponseError(response.status, response.reason,
body)\nboto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized\n<?xml
version=\"1.0\" encoding=\"UTF-8\"?>
\n<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to
validate the provided access credentials</Message></Error></Errors>...
Here is my /home/username/.aws/credentials (I just made up some key_id):
[default]
aws_access_key_id = my_key_name
aws_secret_access_key = ssh-rsa Y...r
Here is my /home/username/.aws/config:
[default]
output = json
region = us-west-2
Both of these files seem to agree with the AWS doc requirements here.
Additional Info:
Host system: Ubuntu 17.10 (non-root user)
The two Ansible tasks are run from separate Ansible playbooks: first the sshkeygen playbook is run, and then the ec2_key playbook. The playbooks are not run using become.
Ansible version = ansible==2.4.1.0
Boto version = boto==2.48.0, botocore==1.7.47
Questions
How can I instruct the AWS CLI to communicate with my online account (STEP 1)? It seems like I am missing this step somewhere in the Ansible task using the ec2_key module.
Currently, I have the SAME public key in (a) the second Ansible task that uploads the public key and (b) /home/username/.aws/credentials. Is this Ansible task missing something or incorrect? Should there be a second public key?
You've put the SSH public key into the secret_access_key field.
It looks like this for me (letters mixed and replaced here of course, not my real key):
[Credentials]
aws_access_key_id = FMZAIQGTCHSLGSNDIPXT
aws_secret_access_key = gcmbyMs1Osj3ISCSasFtEx7sVCr92S3Mvjxlcwav
If you go to IAM (https://console.aws.amazon.com/iam), you can regenerate your keys.
You'll need to go to IAM->Users, click your username, click the Security Credentials, and "Create Access Key".
If you've just set up your account, it's likely that you don't have IAM users, only the so-called root account user (the one you signed up with). In this case, click your name at the top of the main screen, and select My Security Credentials. You might get a warning, but no worries in your case, you're not running a large organization. Click the Access Keys dropdown and click Create New Access Key (you might have none). This will give you the keys you need. Save them somewhere, because when you leave the screen, you'll no longer get the chance to see the secret access key, only the key ID.
However, if you are using a machine with a role attached, you don't need credentials at all, boto should pick them up.
Your secret access key in the credentials file looks wrong. It should be the secret key associated with an IAM user (or left out entirely if you're running from an EC2 instance with an IAM role attached), not the SSH key you're trying to upload.
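To make the distinction concrete, here is a hedged sketch of how the STEP 5 task could look once real IAM credentials are in place: the IAM access key ID and secret live in ~/.aws/credentials (or come from an attached IAM role), and only key_material carries the SSH public key. The key pair name my_keypair and the public key path are illustrative, not values from your setup.
vars:
  aws_region: "us-west-2"
tasks:
  - name: Upload the local SSH public key as an EC2 key pair
    ec2_key:
      name: my_keypair                  # illustrative key pair name, not an access key ID
      region: "{{ aws_region }}"
      key_material: "{{ lookup('file', '/home/username/.ssh/id_rsa.pub') }}"
      state: present
      # aws_access_key / aws_secret_key could be set here from the IAM key pair, but it is
      # usually cleaner to let boto read them from ~/.aws/credentials or an instance role.
With this layout, ~/.aws/credentials keeps only the IAM access key ID and secret access key, exactly as shown in the answer above.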
Can you SSH to the host already? I think the unauthorised error is coming from your attempted access to the host.
Something like:
ssh -i my_key.pem ec2-user@165.0.0.105
You can see your public IP address in the AWS console. If you face errors there, double-check that the instance accepts SSH traffic in its security group.
Related
I have an Ansible playbook which connects to GCP using a service account (SA) and its JSON key file.
I have downloaded the JSON file to my local machine and provided its path as the value of "credentials_file". This works if I run the playbook from my local machine.
Now I want to run this playbook using AWX, and below are the steps I have done.
Created a Credential:
a. Credential Type: Google Compute Engine
b. Name: ansible-gcp-secret
c. Under type details, I uploaded the SA JSON file and it loaded the rest of the data such as the SA email, project and RSA key.
Created a project and synced my Git repo, which has my playbook.
Created a template to run my playbook.
Now I am not sure how to use the GCP SA credentials in AWX to run my playbook. Any help or documentation would be greatly appreciated.
Below is an example of my playbook.
- name: Update Machine Type of GCE Instance
  hosts: localhost
  gather_facts: no
  connection: local
  vars:
    instance_name: ansible-test
    machine_type: e2-medium
    image: Debian GNU/Linux 11 (bullseye)
    zone: us-central1-a
    service_account_email: myuser@project-stg-xxxxx.iam.gserviceaccount.com
    credentials_file: /Users/myuser/ansible/hackthonproject-stg-xxxxx-67d90cb0819c.json
    project_id: project-stg-xxxxx
  tasks:
    - name: Stop (terminate) an instance
      gcp_compute_instance:
        name: "{{ instance_name }}"
        project: "{{ project_id }}"
        zone: "{{ zone }}"
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        status: TERMINATED
Below are the steps we took.
Created a credential type in AWX to pull the secrets from Vault; let's call it secret_type. This exposes the env key "vaultkv_secret".
Created a credential to connect to Vault using a token, with type=HashiCorp Vault Secret Lookup, name=vault_token.
Created another credential to pull the secret (kv type) from Vault with type=custom_vault_puller (this uses the first credential, "vault_token"). Let's say name=secret_for_template.
Created a kv secret in Vault, providing the key and the JSON content as its value.
Created a template that uses the credential "secret_for_template", and provided the secret path and key.
Now, when the template is run, the env var "vaultkv_secret" contains the content of the JSON file. We can save that content to a file and use it as the file input to the GCP modules (see the sketch below).
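To make that last step concrete, here is a minimal sketch (the temp path is illustrative, and the instance details are reused from the question) of writing the JSON held in the AWX-injected vaultkv_secret env var to a file and pointing service_account_file at it:
- name: Update Machine Type of GCE Instance
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    credentials_file: /tmp/gcp-sa.json        # illustrative temp path
  pre_tasks:
    - name: Write the SA key JSON from the AWX-injected env var to a file
      copy:
        content: "{{ lookup('env', 'vaultkv_secret') }}"
        dest: "{{ credentials_file }}"
        mode: "0600"
      no_log: true
  tasks:
    - name: Stop (terminate) the instance using the freshly written key file
      gcp_compute_instance:
        name: ansible-test
        project: project-stg-xxxxx
        zone: us-central1-a
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        status: TERMINATED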
I created a cluster.yaml file which contains the below information:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-litmus-demo
  region: ${AWS_REGION}
  version: "1.21"
managedNodeGroups:
  - instanceType: m5.large
    amiFamily: AmazonLinux2
    name: eks-litmus-demo-ng
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
When I run $ eksctl create cluster -f cluster.yaml to create the cluster from my terminal, I get the error below:
Error: checking AWS STS access – cannot get role ARN for current session: MissingEndpoint: 'Endpoint' configuration is required for this service
How can I resolve this? Please help!!!
Note: I have the global and regional endpoints under STS set to "valid in all AWS regions".
In my case, it was a typo in the region. I had us-east1 as the value; when I corrected it to us-east-1, the error disappeared. So it is worth checking whether there are typos in any of the fields.
Mention --profile if you use an AWS profile other than the default:
eksctl create cluster -f cluster.yaml --profile <profile-name>
My SSO session token had expired:
aws sts get-caller-identity --profile default
The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
Then I needed to refresh my SSO session token:
aws sso login
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.us-east-2.amazonaws.com/
Then enter the code:
XXXX-XXXX
Successfully logged into Start URL: https://XXXX.awsapps.com/start
Error: checking AWS STS access – cannot get role ARN for current session:
According to this, I think it's not able to get the role (in your case, the cluster creator's role) that is responsible for creating the cluster.
Create an IAM user with an appropriate role, and attach the necessary policies to that role to create the EKS cluster.
Then you can use the aws configure command to add the AWS Access Key ID, AWS Secret Access Key, and Default region name.
[Make sure that the user has the appropriate access to create and access the EKS cluster in your AWS account. You can use the AWS CLI to verify that you have the appropriate access.]
It is important to configure the default profile for the AWS CLI correctly on the command line, for example by exporting the credentials as environment variables:
export AWS_ACCESS_KEY_ID=<your_access_key>
export AWS_SECRET_ACCESS_KEY=<your_secret_key>
Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo to apply IaC using Terraform. To access the credentials needed for the Terraform AWS provider, I used AWS Systems Manager Parameter Store to retrieve the access and secret key within the buildspec.yml.
Problem:
Systems Manager Parameter Store masks the access and secret key env values, so when they are inherited by the Terraform AWS provider, the provider reports that the credentials are invalid:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: xxxx
To reproduce the problem:
Create Systems Manager Parameter Store parameters (TF_AWS_ACCESS_KEY_ID=access, TF_AWS_SECRET_ACCESS_KEY=secret)
Create an AWS CodeBuild project with:
"source": {
  "type": "NO_SOURCE"
},
"environment": {
  "type": "LINUX_CONTAINER",
  "image": "aws/codebuild/standard:4.0",
  "computeType": "BUILD_GENERAL1_SMALL"
}
buildspec.yml with the following (modified to create the .tf files inline instead of sourcing them from GitHub):
version: 0.2
env:
  shell: bash
  parameter-store:
    TF_VAR_AWS_ACCESS_KEY_ID: TF_AWS_ACCESS_KEY_ID
    TF_VAR_AWS_SECRET_ACCESS_KEY: TF_AWS_SECRET_ACCESS_KEY
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "provider \"aws\" {\n\taccess_key = var.AWS_ACCESS_KEY_ID\n\tsecret_key = var.AWS_SECRET_ACCESS_KEY\n\tversion = \"~> 3.2.0\"\n}" >> provider.tf
      - printf "variable \"AWS_ACCESS_KEY_ID\" {}\nvariable \"AWS_SECRET_ACCESS_KEY\" {}" > vars.tf
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
Attempts:
Passing creds through the terraform -var option:
terraform plan -var="AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID" -var="AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY"
but I get the same invalid credentials error.
Exporting the Parameter Store credentials within buildspec.yml:
commands:
  - export AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID
  - export AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY
which results in duplicate masked variables and the same error as above. printenv output within buildspec.yml:
AWS_ACCESS_KEY_ID=***
TF_VAR_AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
TF_VAR_AWS_SECRET_ACCESS_KEY=***
Possible solution routes:
Somehow pass the MASKED parameter store credential values into Terraform successfully (preferred)
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Unmask the parameter store variables to pass into the aws provider (probably defeats the purpose of using aws system manager in the first place)
I experienced this same issue when working with Terraform on Ubuntu 20.04.
I had configured the AWS CLI using the aws configure command with an IAM credential for the terraform user I created on AWS.
However, when I run the command:
terraform plan
I get the error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 17268b96-6451-4527-8b17-0312f49eec51
Here's how I fixed it:
The issue was caused by a misconfiguration of my AWS CLI via the aws configure command: I had entered the AWS Access Key ID where the AWS Secret Access Key was expected, and the AWS Secret Access Key where the AWS Access Key ID was expected.
I had to run the command below to re-configure the AWS CLI correctly with an IAM credential for the terraform user I created on AWS:
aws configure
You can confirm that it is fine by running a simple AWS CLI command:
aws s3 ls
If you get an error like the one below, then you know you're still not set up correctly:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
That's all.
I hope this helps
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Generally you wouldn't need to hard-code AWS credentials for Terraform to work. Instead, the CodeBuild IAM role should be enough for Terraform, as explained in the Terraform docs.
With this in mind, I verified that the following works and creates the requested bucket using Terraform from a CodeBuild project. The default CodeBuild role was modified with S3 permissions to allow creation of the bucket.
version: 0.2
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test-43242-efdfdfd-4444334\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
      - terraform apply -auto-approve
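If you still want the explicit-credentials route (the question's first solution route), here is a hedged diagnostic sketch reusing the question's parameter names: Parameter Store masking only affects what CodeBuild prints in the logs, so the real values are available to the shell, and calling sts get-caller-identity right after exporting them tells you whether the stored key pair is actually valid before Terraform ever runs.
version: 0.2
env:
  shell: bash
  parameter-store:
    TF_VAR_AWS_ACCESS_KEY_ID: TF_AWS_ACCESS_KEY_ID
    TF_VAR_AWS_SECRET_ACCESS_KEY: TF_AWS_SECRET_ACCESS_KEY
phases:
  install:
    commands:
      # Masking hides these values in the build logs only; they are real here.
      - export AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID
      - export AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY
      # If this fails with InvalidClientTokenId, the stored key pair itself is
      # invalid or inactive, and passing it to Terraform will never work.
      - aws sts get-caller-identity
      # ...then run terraform init / plan as in the buildspec above.
Note that exporting AWS_ACCESS_KEY_ID this way overrides the CodeBuild role credentials for anything else in that shell that relies on the default AWS credential chain.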
Well, my case was quite foolish, but it might help:
After downloading the .csv file, I copy-pasted the keys with aws configure.
In the middle of the secret key there was a "+". In my editor I double-clicked to copy, but double-click selection stops at a non-alphanumeric character, meaning that only the first part of the secret access key was copied.
Make sure that you have copied the full secret key.
I had a 403 error.
The issue is that you should remove the {} from the example code.
provider "aws" {
access_key = "{YOUR ACCESS KEY}"
secret_key = "{YOUR SECRET KEY}"
region = "eu-west-1"
}
It should look like this:
provider "aws" {
access_key = "YOUR ACCESS KEY"
secret_key = "YOUR SECRET KEY"
region = "eu-west-1"
}
I have faced this issue multiple times. The solution is to create a user in AWS from the IAM Management Console, and the error will be fixed.
I am trying to set up an AWS EKS cluster and want to connect to that cluster from my local Windows workstation, but I am not able to connect. Here are the steps I did:
Create an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select the AWS service role "EKS" -> give the role name "eks-role-1").
Create another IAM user named "eks" for programmatic access. This will help me connect to my EKS cluster from my local Windows workstation. The policies I attached to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy" and "AmazonEKS_CNI_Policy".
Next, the EKS cluster was created with the role ARN created in Step 1. Finally, the EKS cluster appeared in the AWS console.
On my local Windows workstation, I downloaded "kubectl.exe" and "aws-iam-authenticator.exe" and ran 'aws configure' using the access key and secret from Step 2 for the user "eks". After configuring "~/.kube/config", I ran the command below and got this error:
Command: kubectl.exe get svc
output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Unable to connect to the server: getting credentials: exec: exit status 1
Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and token for the AWS service role (Step 1: eks-role-1)?
For people who got into this: maybe you provisioned EKS with a profile.
EKS does not add the profile inside kubeconfig.
Solution:
Export the AWS credentials:
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
If you've already configured the AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
Similar to 2, but you only need to do it once: edit your kubeconfig.
users:
  - name: eks # This depends on your config.
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: aws-iam-authenticator
        args:
          - "token"
          - "-i"
          - "general"
        env:
          - name: AWS_PROFILE
            value: "<YOUR_PROFILE_HERE>"
I think I got the answer for this issue; I want to write it down here so people will benefit from it.
When you create the EKS cluster for the first time, check which user (or role) you are creating it as (check your AWS web console user settings). Even if you are creating it from a CFN script, you can assign a different role to create the cluster. You have to get CLI access for that user to start accessing your cluster from the kubectl tool. Once you get first-time access (that user will have admin access by default), you may need to add another IAM user to the cluster admins (or another role) using the aws-auth ConfigMap (see the sketch below); only then can you switch to or use an alternative IAM user to access the cluster from the kubectl command line.
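For reference, a rough sketch of that aws-auth ConfigMap change, assuming a hypothetical IAM user extra-admin in account 111122223333 that should get cluster-admin via the built-in system:masters group; edit the ConfigMap while authenticated as the cluster creator and keep the existing mapRoles entries intact:
# kubectl edit -n kube-system configmap/aws-auth   (run as the cluster creator)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # ...keep the existing mapRoles section for the worker nodes...
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/extra-admin   # hypothetical user
      username: extra-admin
      groups:
        - system:masters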
Make sure the file ~/.aws/credentials has an AWS access key and secret key for an IAM account that can manage the cluster.
Alternatively, you can set the AWS environment variables:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
Adding another option.
Instead of working with aws-iam-authenticator, you can change the command to aws and replace the args as below:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args: #<--- Change the args
        - --region
        - <YOUR_REGION>
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      command: aws #<--- Change the command to aws
      env:
        - name: AWS_PROFILE
          value: <YOUR_PROFILE_HERE>
I used Ansible to create a gce cluster following the guideline at: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
And at the end of the GCE creation, I used the add_host Ansible module to register all instances in their corresponding groups, e.g. gce_master_ip.
But when I try to run the following tasks after the creation task, they do not work:
- name: Create redis on the master
  hosts: gce_master_ip
  connection: ssh
  become: True
  gather_facts: True
  vars_files:
    - gcp_vars/secrets/auth.yml
    - gcp_vars/machines.yml
  roles:
    - { role: redis, tags: ["redis"] }
Within the auth.yml file I already provided the service account email, the path to the JSON credential file, and the project ID. But apparently that's not enough. I got errors like the one below:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
This is a typical case of the SSH username and credentials not being permitted or not being provided. In this case, I would say I did not set up the username and private key for the SSH connection that Ansible uses to connect.
Is there anything I should do to make sure the corresponding credentials are provided to establish the connection?
During my search, I saw one question briefly mention that you could use the gcloud compute ssh... command. But is there a way to tell Ansible not to use classic SSH and to use the gcloud one instead?
To have Ansible SSH into a GCE instance, you'll have to supply an SSH username and a private key which corresponds to the SSH configuration available on the instance.
So the question is: If you've just used the gcp_compute_instance Ansible module to create a fresh GCE instance, is there a convenient way to configure SSH on the instance without having to manually connect to the instance and do it yourself?
For this purpose, GCP provides a couple of ways to automate and manage key distribution for GCE instances.
For example, you could use the OS Login feature. To use OS Login with Ansible:
When creating the instance using Ansible, enable OS Login on the target instance by setting the "enable-oslogin" metadata field to "TRUE" via the metadata parameter.
Make sure the Service Account attached to the instance that runs Ansible has both the roles/iam.serviceAccountUser and roles/compute.osLoginAdmin roles.
Either generate a new SSH keypair or choose an existing one that will be deployed to the target instance.
Upload the public key for use with OS Login: This can be done via gcloud compute os-login ssh-keys add --key-file [KEY_FILE_PATH] --ttl [EXPIRE_TIME] (where --ttl specifies how long you want this public key to be usable - for example, --ttl 1d will make it expire after 1 day)
Configure Ansible to use the Service Account's user name and the private key which corresponds to the public key uploaded via the gcloud command, for example by overriding the ansible_user and ansible_ssh_private_key_file inventory parameters, or by passing the --private-key and --user options to ansible-playbook (see the sketch at the end of this answer).
The service account username is the username value returned by the gcloud command above.
Also, if you want to automatically set the enable-oslogin metadata field to "TRUE" across all instances in your GCP project, you can simply add a project-wide metadata entry. This can be done in the Cloud Console under "Compute Engine > Metadata".
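Putting the OS Login pieces above together, a rough sketch (the instance name, project, key path and OS Login username are placeholders, not values from the question):
# Fragment of the instance-creation task, with OS Login switched on:
- name: Create instance with OS Login enabled
  gcp_compute_instance:
    name: my-instance                          # placeholder
    zone: us-central1-a                        # placeholder
    project: my-project                        # placeholder
    auth_kind: serviceaccount
    service_account_file: /path/to/sa.json     # placeholder
    machine_type: e2-medium
    disks:
      - auto_delete: true
        boot: true
        initialize_params:
          source_image: projects/debian-cloud/global/images/family/debian-11
    network_interfaces:
      - access_configs:
          - name: External NAT
            type: ONE_TO_ONE_NAT
    metadata:
      enable-oslogin: "TRUE"                   # turn on OS Login for this instance
    state: present

# Inventory/group vars telling Ansible which SSH identity to use; the user name is
# whatever the gcloud os-login command above reports for the service account:
ansible_user: sa_1234567890                    # placeholder OS Login username
ansible_ssh_private_key_file: ~/.ssh/oslogin_key   # private half of the uploaded key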