On my Ubuntu 18.04.2 LTS I have docker, docker-machine, and docker-compose:
Docker version 18.06.1-ce, build e68fc7a
docker-machine version 0.15.0, build b48dc28
docker-compose version 1.22.0, build unknown
I am following the testdriven.io microservices tutorials but I am stuck at part one - deployment. Unfortunately, it does not offer any help setting up the AWS part.
I have created the .aws/credentials file in the home folder of the user I am running as, using the aws configure command, and this worked.
But when running the command docker-machine create --driver amazonec2 testdriven-prod I get the following error:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Everything works when I use the command-line parameters, but I think I should be able to use the credentials file as well.
I have regenerated the credentials, and the credentials file, a couple of times, but to no avail.
dev@dev01:~$ ls .aws
config credentials
dev@dev01:~$ docker-machine create --driver amazonec2 testdriven-prod
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
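For reference, the error message also lists environment variables as an accepted credential source; a minimal sketch of that form with placeholder keys (not a confirmed fix for the credentials-file problem):
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
docker-machine create --driver amazonec2 testdriven-prod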
Related
After setting up my cluster I tried to connect to it. Everything tests fine, but I get the error below.
Command I executed:
kubectl get svc
Error I get:
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
Related to this:
https://github.com/kubernetes/kubectl/issues/1210
https://github.com/aws/aws-cli/issues/6920
Try updating your aws-cli and kubectl.
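For example, quick checks before and after updating (my sketch; newer releases of both tools use the v1beta1 exec credential API):
aws --version             # aws-cli v2.x (or a recent v1) emits client.authentication.k8s.io/v1beta1 tokens
kubectl version --client  # recent kubectl no longer registers v1alpha1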
This issue occurred for me after I upgraded my local Docker Desktop to the latest version, 4.12.0 (85629). As this version was causing problems while running kubectl commands to update my feature branch Hoard image, I did the following steps to resolve them.
I updated my local config file under C:/Users/vvancha/.kube by replacing v1alpha1 with v1beta1
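On a Unix-like shell (e.g. Git Bash) the same edit can be scripted; a minimal sketch, assuming the kubeconfig lives at the path above (editing the file by hand works just as well):
# bump the exec plugin apiVersion from v1alpha1 to v1beta1
sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#' C:/Users/vvancha/.kube/config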
I also took the latest version of k9s from https://github.com/derailed/k9s/releases . The latest as of now is https://github.com/derailed/k9s/releases/download/v0.26.7/k9s_Windows_x86_64.tar.gz
I updated my AWS CLI to the latest v2 release locally:
In cmd, run msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
Confirmed that my version is now aws-cli/2.8.3 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
I updated my STS client to point to the required role.
Then I ran this command to update the kubeconfig:
aws --region us-east-1 eks update-kubeconfig --name dma-dmpreguse1 --alias dmpreguse1   (change these as per your needs)
Open k9s and verify it.
Now I am able to apply my required changes.
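To double-check the result, a quick sanity check (my addition, not part of the original steps):
aws sts get-caller-identity   # confirm the CLI is using the expected role
kubectl get svc               # the command that was failing before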
I'm playing around with AWS and my credentials worked a few months back. I'm using the credentials file located in ~/.aws/credentials
with the keys provided by AWS. They updated the access key, so I changed it in the file, but the secret key remained the same.
I've got the credentials file in this format:
[default]
aws_access_key_id=xyz
aws_secret_access_key=xyz
region=eu-west-2
vpc-id=xyz
When I run docker-machine create --driver amazonec2 testdriven-prod
I get this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
The file is in the right directory, though. Why can't docker-machine see it? I really don't understand this error.
What can I try to resolve this?
This isn't a real answer, rather a find.
I used the verbose CLI command to create the instance and it worked, even though this:
docker-machine create --driver amazonec2 --amazonec2-access-key XYZ --amazonec2-secret-key XYZ --amazonec2-open-port 8000 --amazonec2-region eu-west-2 testdriven-prod
should be equivalent to:
aws_access_key_id=XYZ
aws_secret_access_key=XYZ
region=eu-west-2
in the ~/.aws/credentials file, the behaviour was different.
So if anyone is still interested in sharing what the real answer to this might be, please feel free to post it.
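One difference worth noting (not a confirmed fix, just the standard AWS CLI convention): region and vpc-id are normally not keys in the credentials file; the CLI keeps only the keys in ~/.aws/credentials and the region in ~/.aws/config, e.g.:
# ~/.aws/credentials
[default]
aws_access_key_id=XYZ
aws_secret_access_key=XYZ
# ~/.aws/config
[default]
region=eu-west-2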
I am trying to deploy my application with Kubernetes using AWS kops. For this I followed the steps given in the AWS workshop tutorial:
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/01-path-basics/101-start-here
I created an AWS Cloud9 environment by logging in as an IAM user and installed kops and the other required software as well. When I try to create the cluster using the following command
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes
I get an error like the one below in the Cloud9 IDE:
error running tasks: deadline exceeded executing task IAMRole/nodes.cs.cluster.k8s.local. Example error: error creating IAMRole: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: 30fe2a97-0fc4-11e8-8c48-0f8441e73bc3
I am not able to find a way to solve this issue. Any help on this would be appreciated.
I found the issue and fixed it. I had not exported the following two environment variables in the terminal where I was running create cluster. These two variables are required while creating a cluster using kops:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
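With those exported, re-running the create and then validating should be enough; a short sketch, assuming KOPS_STATE_STORE and AWS_AVAILABILITY_ZONES are still set as in the workshop:
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes
kops validate cluster   # reports when the masters and nodes are ready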
I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. use terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command on the instance bash as the jenkins user still works fine (e.g. running sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists bucket files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum; the AWS CLI was installed using the bundled installer provided by AWS.
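A diagnostic that may help narrow this down (my suggestion, not from the original post): check which credential source the CLI actually resolves when run as the jenkins user:
sudo -u jenkins /usr/bin/aws configure list   # the Type column shows whether the iam-role credentials are picked up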
I have an Ansible master running on an Ubuntu EC2 server with an IAM role that has full permissions on EC2 and nothing else. All the instances deployed using this Ansible master do get created, but end up in the terminated state.
However, while I was testing another approach, I created a new master and provided my authentication keys, which belong to a root user with all permissions.
Is there a problem with the IAM role's permissions, or is deployment known not to work with IAM roles?
It works as expected for me:
root@test-node:~# cat /etc/issue
Ubuntu 14.04.4 LTS \n \l
root@test-node:~# ansible --version
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
root@test-node:~# pip list | grep boto
boto (2.42.0)
If no credentials are specified in environment variables or config files, Boto (the library Ansible uses to connect to AWS) will try to fetch credentials from the instance metadata.
You may try to fetch them manually with:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
and pass the AccessKeyId and SecretAccessKey (and Token) from the response to Ansible via environment variables to test that the role's permissions are correct.
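For example, a minimal sketch (the values are placeholders taken from the metadata response above; AWS_SECURITY_TOKEN and AWS_REGION/EC2_REGION are variable names that Boto and Ansible's EC2 modules should recognize):
export AWS_ACCESS_KEY_ID=<AccessKeyId from the metadata response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the metadata response>
export AWS_SECURITY_TOKEN=<Token from the metadata response>
export AWS_REGION=<your region>   # or EC2_REGION, or pass region as a module parameter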
Keep in mind that:
the IAM role should be attached to the EC2 instance before it starts
the region should always be defined, either via module parameters or via an environment variable