After setting up my cluster I tried to connect to it. Everything looked fine in testing, but I get the error below.
Command I executed:
kubectl get svc
Error I get:
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
This is related to:
https://github.com/kubernetes/kubectl/issues/1210
https://github.com/aws/aws-cli/issues/6920
Try updating your aws-cli and kubectl.
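Before anything else, it is worth confirming the client versions, since the v1alpha1 exec credential API was dropped in newer kubectl and AWS CLI releases:
aws --version
kubectl version --client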
This issue occurred for me after I upgraded my local Docker Desktop to the latest version, 4.12.0 (85629). As this version was causing problems while running kubectl commands to update my feature-branch Hoard image, I did the following steps to resolve them.
I updated my local kubeconfig file under C:/Users/vvancha/.kube by replacing v1alpha1 with v1beta1.
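For reference, the change is to the apiVersion under the user's exec section in the kubeconfig. On Linux, macOS, or Git Bash a one-liner like this does it (GNU sed shown; on Windows you can just edit the file in a text editor):
sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#' ~/.kube/config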
And I took the latest version of k9s from https://github.com/derailed/k9s/releases . The latest as of now is https://github.com/derailed/k9s/releases/download/v0.26.7/k9s_Windows_x86_64.tar.gz
I updated my AWS CLI to the latest v2 release with the following command on my local machine:
Run in cmd: msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
Confirmed that my version is aws-cli/2.8.3 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
I updated my STS client to point to my required role.
Then I ran the command to update the kubeconfig:
aws --region us-east-1 eks update-kubeconfig --name dma-dmpreguse1 --alias dmpreguse1   (change as per your need)
Open k9s and verify it.
Now I am able to update my required changes.
I defined my KUBECONFIG for the AWS EKS cluster:
aws eks update-kubeconfig --region eu-west-1 --name yb-demo
but got the following error when using kubectl:
...
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[opc@C eks]$ kubectl get sc
Unable to connect to the server: getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
You can also append your custom AWS CLI installation path to the $PATH variable in ~/.bash_profile: export PATH=$PATH:<path to aws cli program directory>. This way you do not need to sed the kubeconfig file every time you add an EKS cluster, and you can run the aws command at the prompt without specifying the full path to the program for every execution.
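A minimal sketch, assuming the AWS CLI v2 default install location on Linux (adjust the path to wherever your aws binary actually lives):
echo 'export PATH=$PATH:/usr/local/aws-cli/v2/current/bin' >> ~/.bash_profile
source ~/.bash_profile
which aws    # should now print the full path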
I had this problem when installing kubectx on Ubuntu Linux via a Snap package; it did not seem to be able to access the AWS CLI in that case. I worked around the issue by removing the Snap package and just using the shell scripts instead.
It seems that the command: aws entry in ~/.kube/config is not resolved through PATH, so aws isn't found. Here is how to change it to the full path:
sed -e "/command: aws/s?aws?$(which aws)?" -i ~/.kube/config
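To confirm the substitution worked (assuming the default kubeconfig location):
grep -n 'command:' ~/.kube/config    # should now show the absolute path to aws
kubectl get sc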
I am attempting to gain shell-level access from a Windows machine to a Linux ECS task in an AWS Fargate cluster via the AWS CLI (v2.1.38) through aws-vault.
The redacted command I am using is
aws-vault exec my-profile -- aws ecs execute-command --cluster
my-cluster-name --task my-task-id --interactive --command "/bin/sh"
but this fails with this output
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
Starting session with SessionId: ecs-execute-command-0bc2d48dbb164e010
SessionId: ecs-execute-command-0bc2d48dbb164e010 :
----------ERROR-------
Unable to start shell: Failed to start pty: fork/exec C:/Program: no such file or directory
I can see that ECS Exec is enabled on this task because an aws describe shows the following.
It appears that it's recognising the host is a Windows machine and attempting to initialise based on a Windows-specific variable.
Is anyone able to suggest what I can do to resolve this?
Ran into the same error. Using --command "bash" worked for me on Windows 10.
I was using Windows 7. I think that without WSL (Windows 10+), Linux, or macOS it just doesn't work. There's another suggestion explained here that wasn't worth the trouble:
Cannot start an AWS ssm session on EC2 Amazon linux instance
For me, I just used a Linux bastion inside AWS and it worked from there.
Running this command from Windows PowerShell worked for me.
Ran into a similar issue. Not all Docker containers include bash.
Try using:
--command "sh"
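Putting that together with the command from the question (profile, cluster, and task values are the question's placeholders):
aws-vault exec my-profile -- aws ecs execute-command --cluster my-cluster-name --task my-task-id --interactive --command "sh"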
I'm trying to log in to AWS ECR with the docker login command. I can get a password from the AWS CLI with aws ecr get-login-password, but when piping it into docker login I get the following error:
Error saving credentials: error storing credentials - err: exit status 1, out: `not implemented`
The command I am running is the one recommended in the AWS ECR documentation:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin account_id_redacted.dkr.ecr.us-east-1.amazonaws.com/blog-project
I'm running the latest version of AWS CLI as of this question, 2.0.57.
I'm running Docker version 2.4.0 on macOS 10.14.6
Has anyone else run into this issue, and if so have they found a solution?
I've definitely achieved this in the past, but I wonder if there is an issue between the latest versions of Docker and the AWS CLI...
I'm not 100% sure what the issue was here, but it was something to do with the Docker credentials helper.
I installed the Docker credential helper for macOS and changed the credsStore parameter in ~/.docker/config.json to osxkeychain. That fixed the issue.
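A minimal sketch of what that ends up looking like, assuming the helper is installed via Homebrew (the formula name is my assumption; install it however you prefer): run brew install docker-credential-helper, then make sure ~/.docker/config.json contains:
{
  "credsStore": "osxkeychain"
}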
I had a similar issue; it seems my ~/.docker/config.json was totally messed up after working with multiple repos/hubs.
So I just wiped all the content from the file, leaving it empty, and reran aws ecr get-login-password | docker login ..., which automatically repopulated the config with appropriate values.
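Roughly, keeping a backup first (the region and registry are the ones from the question; an empty JSON object works as the reset value):
cp ~/.docker/config.json ~/.docker/config.json.bak
echo '{}' > ~/.docker/config.json
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin account_id_redacted.dkr.ecr.us-east-1.amazonaws.com/blog-project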
I had this issue on macOS. In ~/.docker/config.json, remove:
"credsStore" : "ecr-login"
This resolved the issue for me.
If anybody has the same problem on Windows, go to your user folder under C:\Users and remove the config.json file in the .docker folder.
It might fix your problem.
I believe this is the intended result (sorta). The point of using amazon-ecr-credential-helper is to not need to use docker login. You should instead configure the AWS CLI with your profile credentials (mine: myprofile). Then, you would just need to slightly modify your scripts.
For example, the steps AWS gives in ECR to upload a Docker image are:
1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com
Note: If you receive an error using the AWS CLI, make sure that you have the latest version of the AWS CLI and Docker installed.
2. Build your Docker image using the following command. For information on building a Docker file from scratch see the instructions here. You can skip this step if your image is already built:
docker build -t toy_project .
3. After the build completes, tag your image so you can push the image to this repository:
docker tag toy_project:latest XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
4. Run the following command to push this image to your newly created AWS repository:
docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
However, you would want to skip step 1. The reason is that if you have configured the AWS CLI (e.g. aws configure --profile myprofile), your credentials are already stored, so you can go straight to step 2.
On step 4, you simply need to prefix the command with AWS_PROFILE, like below:
AWS_PROFILE=myprofile docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
With amazon-ecr-credential-helper, you no longer need to use docker login or worry about storing credentials; that is the whole point of the helper. However, this may not be the best solution for you if you need to actively use docker login in your scripts.
Note: my ~/.docker/config.json looks like
{
"credsStore": "ecr-login"
}
I was getting the same error while running this command on macOS.
The error possibly occurred because the Docker socket didn't have the appropriate read/write/execute permissions for the user.
Also while I was doing
% docker ps
It was giving the error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What I did:
% sudo chmod 777 /var/run/docker.sock
This gave all the required permissions to that location.
Hope it helps!
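To double-check afterwards (paths are the macOS/Linux defaults):
ls -l /var/run/docker.sock    # inspect ownership and permissions on the socket
docker ps                     # should connect to the daemon now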
I have a service on Google Cloud Run that I can deploy manually through the Google Cloud Console UI using an image in Container Registry, but deployment from the CLI is failing. Here are the command I am using and the error I get; I am not able to understand what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: spec.revisionTemplate.spec.container.ports should be empty
field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue
Here's my SDK version:
$ gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multi-stage Docker build. Here's my Dockerfile:
# Stage 1: build a statically linked Linux binary in the dev image
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    /usr/local/bin/go build -a \
    -ldflags '-w -extldflags "-static"' \
    -o /root/go/bin/executable ./cmds/project/main.go

# Stage 2: copy only the compiled binary into a minimal runtime image
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
CMD executable
I had this same problem, and I assume it was because I had an older Cloud Run deployment that had been created before I ran gcloud components update at some point.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via terminal). I noticed that the ports: definition disappeared from the YAML once I did this.
After this I could do deployments normally.
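To check whether a stale ports: block is still present on an existing service before deleting it, something like this can help (the service and region names are placeholders, and the fully managed platform is assumed):
gcloud run services describe service-name --platform managed --region region-name --format yaml | grep -n -A 2 'ports:'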
This was a bug in Cloud Run. It has been fixed and deploying with CLI is working for me now. Here's the link to the issue I had raised with Google Cloud which has a response from them https://issuetracker.google.com/issues/144069696.
I am trying to deploy my application to Kubernetes on AWS using kops. For this I followed the steps given in the AWS workshop tutorial:
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/01-path-basics/101-start-here
I created an AWS Cloud9 environment by logging in as an IAM user and installed kops and the other required software. When I try to create the cluster using the following command
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES
--yes
I get an error like the one below in the Cloud9 IDE:
error running tasks: deadline exceeded executing task IAMRole/nodes.cs.cluster.k8s.local. Example error: error creating IAMRole: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: 30fe2a97-0fc4-11e8-8c48-0f8441e73bc3
I am not able to find a way to solve this issue. Any help on this would be appreciated.
I found the issue and fixed it. I had not exported the following two environment variables in the terminal where I was running kops create cluster. These two variables are required when creating a cluster with kops:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
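To confirm the credentials are actually being picked up before retrying (a standard AWS CLI call, then the same kops command from the question):
aws sts get-caller-identity
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes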