Error `executable aws not found` with kubectl config defined by `aws eks update-kubeconfig` - amazon-web-services

I defined my KUBECONFIG for the AWS EKS cluster:
aws eks update-kubeconfig --region eu-west-1 --name yb-demo
but got the following error when using kubectl:
...
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[opc#C eks]$ kubectl get sc
Unable to connect to the server: getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

You can also append your custom AWS CLI installation path to the $PATH variable in ~/.bash_profile: export PATH=$PATH:<path to aws cli program directory>. This way you do not need to sed the kubeconfig file every time you add an EKS cluster, and you will also be able to run the aws command at the prompt without specifying the full path to the program on every execution.
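As a sketch of that profile change (here /tmp/demo_profile stands in for ~/.bash_profile, and /opt/aws-cli/bin is a placeholder for your actual AWS CLI install directory):

```shell
# Append a hypothetical AWS CLI install dir to PATH via a profile file.
# /tmp/demo_profile stands in for ~/.bash_profile; /opt/aws-cli/bin for
# your actual AWS CLI program directory.
echo 'export PATH=$PATH:/opt/aws-cli/bin' >> /tmp/demo_profile
# Reload the profile so the current session picks it up
. /tmp/demo_profile
echo "$PATH"
```

After reloading, both kubectl's credential plugin lookup and your interactive shell resolve aws from PATH.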

I had this problem when installing kubectx on Ubuntu Linux via a Snap package: the confined Snap does not seem to be able to access the AWS CLI. I worked around the issue by removing the Snap package and just using the plain shell scripts instead.

It seems that the command: aws entry in ~/.kube/config is not resolved through the PATH environment, so the executable is not found. Here is how to change it to the full path:
sed -e "/command: aws/s?aws?$(which aws)?" -i ~/.kube/config
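To see what that sed does, here is a self-contained sketch against a throwaway kubeconfig fragment (the file path and the /usr/local/bin/aws location are illustrative; on your machine, $(which aws) supplies the real path):

```shell
# Build a minimal kubeconfig fragment like the one update-kubeconfig writes
cat > /tmp/demo-kubeconfig <<'EOF'
users:
- name: demo
  user:
    exec:
      command: aws
EOF
# Same substitution as above, with the resolved path hard-coded for the demo
sed -e "/command: aws/s?aws?/usr/local/bin/aws?" -i /tmp/demo-kubeconfig
grep 'command:' /tmp/demo-kubeconfig
```

The exec plugin line now carries an absolute path, so kubectl no longer depends on PATH to find aws.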

Related

Is the update-kubeconfig command a client-only command or does it affect the cluster

I get the following warning/message when I run some k8s-related commands
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update
and then I know I should run the command like so:
aws eks update-kubeconfig --name cluster_name --dry-run
I think the potential change will be client-side only and will not cause any change on the server side - the actual cluster. I just wanted some verification of this, or otherwise. Many thanks
Yes, update-kubeconfig does not make any changes to the cluster. It will only update your local .kube/config file with the cluster info. Note that with the --dry-run flag, no change will be made at all - the resulting configuration will just be printed to stdout.
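For context, the user entry that update-kubeconfig writes into .kube/config (and that --dry-run would print) looks roughly like the fragment below; the account ID, region, and cluster name here are illustrative:

```yaml
users:
- name: arn:aws:eks:eu-west-1:111122223333:cluster/yb-demo
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - eu-west-1
        - eks
        - get-token
        - --cluster-name
        - yb-demo
```

Current AWS CLI versions write apiVersion client.authentication.k8s.io/v1beta1; the deprecation warning about v1alpha1 is prompting exactly this refresh of the local file.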

Pre-provided parameters to command with required prompt in SHELL

So I have to deploy an AWS Elastic Beanstalk application with the AWSEB CLI on Jenkins. When I try to use the command
eb init
it prompts for some information and credentials. The credentials are stored as parameters or as a secret file on the Jenkins instance, and the command has no flag like --parameter to provide them at start. Is there any way to provide all of the parameters in code, so that at runtime each prompt is already answered? Something like this:
eb init --username XXX --password XXX --others XXX
Here is documentation for that command
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html
Would this answer help with your issue? It seems you can set some of the parameters as environment variables and the rest as flags. For example:
$ export AWS_ACCESS_KEY_ID="xxx"
$ export AWS_SECRET_ACCESS_KEY="xxx"
$ export AWS_DEFAULT_REGION="xxx"
Then
eb init --region eu-west-1 --platform <platform-name> appname
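Another angle: eb init persists its answers in .elasticbeanstalk/config.yml inside the project directory, so you could also generate that file in the Jenkins job before running eb. A rough sketch under that assumption (key names follow a typically generated file; the application, platform, and profile values are illustrative):

```yaml
global:
  application_name: appname
  default_platform: <platform-name>
  default_region: eu-west-1
  profile: eb-cli
```

With the file in place and the AWS credentials supplied via environment variables as above, subsequent eb commands should not need interactive prompts.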

We are in the process of configuring Kubectl to manage our cluster in Azure Kubernetes Service

1. Can we get the Azure kubectl (exe/bin) as a downloadable link instead of installing it using the command line 'az aks install-cli', or can we use the kubectl from Kubernetes?
2. Is there any azure-cli command to change the default location of the kubeconfig file? By default it points to '.kube\config' on Windows.
Example: instead of using the '--kubeconfig' flag, as in 'kubectl get nodes --kubeconfig D:\config', can we change the default location?
you can use the default Kubernetes CLI (kubectl), and you can get credentials with the Azure CLI via az aks get-credentials --resource-group <RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>, or over the Azure portal UI.
You can use the flag --kubeconfig=<PATH>, or you can override the default location, $HOME/.kube/config, by setting the environment variable KUBECONFIG=<PATH>.
Example:
:: Windows (cmd)
set KUBECONFIG=D:\config
# Windows (PowerShell)
$env:KUBECONFIG = "D:\config"
# Linux/macOS (bash)
export KUBECONFIG=D:\config

AWS ECS Exec fails on shell access to ECS Fargate task (Unable to start shell: Failed to start pty: fork/exec C:/Program: no such file or directory)

I am attempting to gain shell level access from a windows machine to a Linux ECS Task in an AWS Fargate Cluster via the AWS CLI (v2.1.38) through aws-vault.
The redacted command I am using is
aws-vault exec my-profile -- aws ecs execute-command --cluster
my-cluster-name --task my-task-id --interactive --command "/bin/sh"
but this fails with this output
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
Starting session with SessionId: ecs-execute-command-0bc2d48dbb164e010
SessionId: ecs-execute-command-0bc2d48dbb164e010 :
----------ERROR-------
Unable to start shell: Failed to start pty: fork/exec C:/Program: no such file or directory
I can see that ECS Exec is enabled on this task because an aws describe shows the following.
It appears that it is recognising that the host is a Windows machine and attempting to initialise based on a Windows-specific variable (the path is being split at the space in C:/Program Files).
Is anyone able to suggest what I can do to resolve this?
Ran into the same error. Using --command "bash" worked for me on Windows 10.
I was using Windows 7; I think without WSL (Windows 10+), Linux, or Mac it just doesn't work. There is another suggestion explained here that is not worth the trouble:
Cannot start an AWS ssm session on EC2 Amazon linux instance
For me, I just used a Linux bastion inside AWS and it worked from there.
Using a Windows PowerShell to run this command worked for me.
Ran into a similar issue. Not all Docker containers have bash installed.
Try using:
--command "sh"

AWS ECR Get login command - No such file or directory via ansible

I have an ansible task as such
- name: Login to AWS
  command: $(aws ecr get-login --no-include-email --region us-east-2)
On running this I get the output
FAILED! => {"changed": false, "cmd": "'^$(aws' ecr get-login
--no-include-email --region 'us-east-2)'", "msg": "[Errno 2] No such file or directory", "rc": 2}
What is the reason ? I believe this command should run fine
The $(command) construction you are using is "command substitution". The shell runs the command, captures its output, and inserts that into the command line that contains the $(…). It is intended to be used from a shell command line to log in to the ECR service.
Ansible's command module does not launch the command through a shell, so it cannot perform command substitution in that context, and it reports "file not found" as a result.
From
https://docs.ansible.com/ansible/2.5/modules/command_module.html
The given command will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and operations like "<", ">", "|", ";" and "&" will not work (use the shell module if you need these features).
Instead, you could create a shell script that does the ECS login, and the process you need to run after you login.
Then call that script from the command parameter.
Please note, the documentation above does refer to a shell module that can be used. You should use that module if you need this type of command substitution.
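A minimal sketch of that approach with the shell module (the task name is illustrative; shell runs the command through /bin/sh, so the $(...) substitution works):

```yaml
- name: Login to AWS ECR
  shell: $(aws ecr get-login --no-include-email --region us-east-2)
```

The get-login command prints a docker login command; the substitution then executes it in the same shell.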
There are several issues on the ansible issue tracker, and they all point to amazon-ecr-credential-helper as the solution.
Basically, you install the package via your normal package manager (e.g. apt install amazon-ecr-credential-helper on Debian-based systems) and configure the Docker engine to use it via credential helpers, usually in the ~/.docker/config.json file. For this to work, the machine must have some means to authenticate with AWS. I personally use EC2 instance roles for this, but you can also use credentials files, environment variables, or other mechanisms that AWS provides.
example ~/.docker/config.json:
{
  "credHelpers": {
    "<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com": "ecr-login"
  }
}
Ansible directives to get there:
---
- name: Install docker.io + helper script for ecr
  apt:
    name: "{{ item }}"
    state: latest
  loop:
    - docker.io
    - amazon-ecr-credential-helper

- name: Creates directory
  file:
    path: /home/ubuntu/.docker/
    state: directory

- name: add ECR configuration to docker repositories
  copy: src=config.json dest=/home/ubuntu/.docker/config.json
rc is the return (exit) code, '2' in your case, of the command aws ecr get-login --no-include-email --region us-east-2
The points listed below may help to troubleshoot:
Verify that the AWS CLI is installed correctly, using the aws --version command.
As per the documentation, the get-login command is available in the AWS CLI starting with version 1.9.15; version 1.11.91 or later is recommended.
Try updating the AWS CLI version:
pip install awscli --upgrade --user