I'm playing around with AWS and my credentials worked a few months back. I'm using the credentials file located in ~/.aws/credentials
with the keys provided by AWS. The access key was updated, so I changed it in the file; the secret key remained the same.
I've got the credentials file in this format:
[default]
aws_access_key_id=xyz
aws_secret_access_key=xyz
region=eu-west-2
vpc-id=xyz
When I run docker-machine create --driver amazonec2 testdriven-prod
I get this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
The file is in the right directory though. Why can't docker-machine see it? I really don't understand this error.
What can I try to resolve this?
This isn't a real answer, rather a find.
I used the verbose CLI command to create the instance and it worked. Even though
this:
docker-machine create --driver amazonec2 --amazonec2-access-key XYZ --amazonec2-secret-key XYZ --amazonec2-open-port 8000 --amazonec2-region eu-west-2 testdriven-prod
should be equivalent to:
aws_access_key_id=XYZ
aws_secret_access_key=XYZ
region=eu-west-2
in the ~/.aws/credentials file, the behaviour was different.
So if anyone is still interested in sharing what the real answer to this might
be, please feel free to post it.
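In the meantime, the error message itself lists environment variables as another accepted credential source, so a rough workaround sketch (the key values below are placeholders, not real ones) could be:
# export the same keys the flags would otherwise pass on the command line
export AWS_ACCESS_KEY_ID=XYZ
export AWS_SECRET_ACCESS_KEY=XYZ
docker-machine create --driver amazonec2 --amazonec2-region eu-west-2 testdriven-prod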
Related
I'm about to deploy a Docker container on AWS with a credentials file formatted like this:
[default]
aws_access_key_id = KEY
aws_secret_access_key = KEY
region=eu-west-2
vpc-id=vpc-bb1b7fd3
and located in ~/.aws/credentials
When I execute the command docker-machine create --driver amazonec2 app
I get:
Couldn't determine your account Default VPC ID : "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: faf606d9-b12e-4a9e-a6c5-18eb609ffc45"
Error setting machine configuration from flags provided: amazonec2 driver requires either the --amazonec2-subnet-id or --amazonec2-vpc-id option or an AWS Account with a default vpc-id
The default VPC ID is already defined. Can anyone help resolve this or point me in the right direction?
The command I'm using:
docker-machine create --driver amazonec2 --amazonec2-access-key AKIAyyy --amazonec2-secret-key AKIAxxx --amazonec2-region eu-west-2 --amazonec2-vpc-id vpc-bb1b7fd3 flask_app
and when I try to use the credentials file located in my file system:
docker-machine create --driver amazonec2 flask_app
where vpc-bb1b7fd3 was generated by AWS by default, hence it must be valid, and the time is correct too. I also tried swapping the keys in case I had somehow mixed them up, but they are OK as well. The output from sudo ntpdate ntp.ubuntu.com was identical to the machine's system time.
Error says: Error with pre-create check: "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: 9d642d91-cd93-4104-b9fb-2a42b1249e3b"
Tried:
On Stack Exchange a very similar problem was solved by restarting the Docker daemon, because Docker's clock stops syncing with the computer's time when the computer goes to sleep and is woken up again. I restarted the Docker daemon, but there was no change. Still the same error.
The problem was solved by downloading rootkey.csv from AWS and moving it into ~/.aws.
The Docker instance is now deployed to AWS.
The issue is not with the keys, so there are two possible reasons:
Your system time is wrong
An invalid VPC ID
You should check your computer's clock; it may be wrong even though it is set to update "automatically from the internet." Running the following should fix the computer's clock:
sudo ntpdate ntp.ubuntu.com
Or run the equivalent command for your OS.
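For example, on a systemd-based Linux distribution (an assumption about your setup), checking and enabling clock synchronisation could look like this:
# show the current clock and NTP synchronisation status
timedatectl status
# turn automatic time synchronisation on
sudo timedatectl set-ntp true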
If the clock is correct and you still see "AWS was not able to validate the provided access credentials", the second reason likely applies: you may be missing some flags in your command. If fixing the time does not help, please update the question with the exact command you are using.
VPC ID
We determine your default VPC ID at the start of a command. In some
cases, either because your account does not have a default vpc, or you
don’t want to use the default one, you can specify a vpc with the
--amazonec2-vpc-id flag.
Log in to the AWS console. Go to Services -> VPC -> Your VPCs and locate the VPC ID you want from the VPC column. Then go to Services -> VPC -> Subnets and examine the Availability Zone column to verify that zone a exists and matches your VPC ID. For example, us-east1-a is in the a availability zone. If the a zone is not present, you can create a new subnet in that zone or specify a different zone when you create the machine.
To create a machine with a non-default VPC-ID:
docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C********* --amazonec2-vpc-id vpc-****** aws02
This example assumes the VPC ID was found in the a availability zone. Use the --amazonec2-zone flag to specify a zone other than the a zone. For example, --amazonec2-zone c signifies us-east1-c.
Source: "Docker Machine with AWS driver (Amazon Web Services)", from the Docker documentation.
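If you prefer the CLI to the console, one way to look up the default VPC (assuming your credentials and region already work for read-only calls) is something like:
# print the ID of the default VPC in the given region
aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query "Vpcs[].VpcId" --output text --region eu-west-2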
On my Ubuntu 18.04.02 LTS I have docker, docker-machine and docker-compose:
Docker version 18.06.1-ce, build e68fc7a
docker-machine version 0.15.0, build b48dc28
docker-compose version 1.22.0, build unknown
I am following the testdriven.io microservices tutorials but I am stuck at part one - deployment. Unfortunately, it does not offer any help setting up the AWS part.
I have created an .aws/credentials file in the home folder of the user I am using, via the aws configure command, and this worked.
But when running the command docker-machine create --driver amazonec2 testdriven-prod I get the following error:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Everything seems to work when using the command line parameters but I think I should be able to use the credentials file as well.
I have regenerated the credentials a couple of times, and recreated the credentials file as well, but to no avail.
dev@dev01:~$ ls .aws
config credentials
dev@dev01:~$ docker-machine create --driver amazonec2 testdriven-prod
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
I am trying to run aws configure via the AWS CLI on my laptop running Windows 10 Professional. This is not the first time I am configuring a profile; I already have many profiles set up.
I have enough free system memory and storage and sufficient rights to run aws configure. I am using Python 3.6.
Here is, in outline, how I am trying to set up the new profile with aws configure.
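The prompt sequence looks roughly like this; the values shown are placeholders rather than my real keys and region:
aws configure --profile lambdaprofile
AWS Access Key ID [None]: <access key>
AWS Secret Access Key [None]: <secret key>
Default region name [None]: us-east-1
Default output format [None]: json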
When I run this command again, it asks for all the values again.
The same happens even when I run an AWS CLI command using this new profile, for example to create a Lambda function (aws lambda create-function xxxxxxxxx --profile lambdaprofile).
It gives the error below:
The config profile (lambdaprofile) could not be found.
Please help me.
Sometimes this happens: the AWS CLI fails to write the new profile and its settings to the config file correctly.
Here is a fix for this issue.
Run the command below from the Windows console:
notepad %USERPROFILE%\.aws\credentials
You will see that the last line of the credentials file and the new profile header have run together on one line.
E.g. in your particular case, it would look like this:
region = us-east-1[lambdaprofile]
aws_access_key_id = AKIAIGCOZJBAKIAIGCOZJB
aws_secret_access_key = gHZWwhUxRLtwQRUknGgHZWwhUxRLtwQRUknG
region = us-east-1
A similar issue may be present in the config file, which you can check by opening it:
notepad %USERPROFILE%\.aws\config
To fix this issue, move [lambdaprofile] onto a new line, preferably adding an empty line before [lambdaprofile]. It should look like this:
region = us-east-1
[lambdaprofile]
aws_access_key_id = AKIAIGCOZJBAKIAIGCOZJB
aws_secret_access_key = gHZWwhUxRLtwQRUknGgHZWwhUxRLtwQRUknG
region = us-east-1
Apply the same fix in the config file. After fixing it, running aws configure --profile lambdaprofile should show the previously saved values from the credentials and config files.
You can also check whether the values were saved with the command below:
aws configure list --profile lambdaprofile
Alternatively to the fix described above, you can also set the new profile directly with aws configure set,
e.g. in your particular case:
aws configure --profile lambdaprofile set aws_access_key_id AKIAIGCOZJBAKIAIGCOZJB
aws configure --profile lambdaprofile set aws_secret_access_key gHZWwhUxRLtwQRUknGgHZWwhUxRLtwQRUknG
aws configure --profile lambdaprofile set region us-east-1
or
aws configure set profile.lambdaprofile.aws_access_key_id AKIAIGCOZJBAKIAIGCOZJB
aws configure set profile.lambdaprofile.aws_secret_access_key gHZWwhUxRLtwQRUknGgHZWwhUxRLtwQRUknG
aws configure set profile.lambdaprofile.region us-east-1
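Either way, once the profile is written correctly, an additional sanity check you can run is to call STS with the new profile:
# should print the account ID and ARN for the lambdaprofile credentials
aws sts get-caller-identity --profile lambdaprofile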
I am using Ansible's Dynamic Inventory feature to connect to the EC2 instances in my AWS account using:
AWS_PROFILE=personal ansible-playbook cifarm.yml -C
I have copied https://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.py and https://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.ini into the inventory directory.
When I run AWS_PROFILE=personal ansible-playbook cifarm.yml -C, it throws the error below:
Output:
ERROR: Inventory script (inventory/ec2.py) had an execution error: ERROR: "Authentication error retrieving ec2 inventory.
- AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct
- Boto configs found at '~/.boto, ~/.aws/credentials', but the credentials contained may not be correct", while: getting EC2 instances
I am running the playbook from macOS. Please note that I am able to run the following successfully:
aws ec2 describe-instances --page-size 5 --profile personal
This proves that the credentials are correct, and I have also exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I tried to execute the inventory/ec2.py file as below:
./ec2.py --list
And it comes back with the same error. I saw a post suggesting to update the time on the machine I am running the playbook from, so I ran ntpdate -u
But the issue still persists. I have no idea what the reason is.
Any help or suggestions are much appreciated.
I managed to resolve the issue and am explaining it here for others to benefit. When I generated the AWS credentials for AWS_PROFILE=personal, I had values for the following variables in ~/.aws/credentials:
aws_access_key_id
aws_secret_access_key
aws_session_token
However, if you look in ec2.py, it expects a variable called aws_security_token. So all I did was change the variable name from aws_session_token to aws_security_token in ~/.aws/credentials.
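For illustration, with placeholder values, the edited profile in ~/.aws/credentials ends up looking like this:
[personal]
aws_access_key_id = <access key>
aws_secret_access_key = <secret key>
# renamed from aws_session_token so that ec2.py (boto) picks it up
aws_security_token = <session token>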
And voila. Works fine.
For a month or so, I've been studying AWS services and now I have to accomplish some basic tasks on AWS Elastic Beanstalk via the command line. As far as I understand, both the aws elasticbeanstalk [command] and the eb [command] CLIs are installed on the build instance.
When I run eb status inside the application folder, I get a response in the form:
Environment details for: app-name
Application name: app-name
Region: us-east-1
Deployed Version: app-version
Environment ID: env-name
Platform: 64bit Amazon Linux ........
Tier: WebServer-Standard
CNAME: app-name.elasticbeanstalk.com
Updated: 2016-07-14 .......
Status: Ready
Health: Green
That tells me eb init has been run for the application.
On the other hand if I run:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1
I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In the home folder of the current user there is a .aws directory with a credentials file containing a [profile] line and aws_access_key_id and
aws_secret_access_key lines, all set up.
Besides the obvious problem with the credentials, what I really lack is an understanding of the two CLIs. Why is the EB CLI not asking for credentials while the AWS CLI is? When do I use one or the other? Can I use only the AWS CLI? Any clarification on the matter will be highly appreciated.
EDIT:
For anyone ending up here with the same "Unable to locate credentials" problem: adding the --profile profile-name option solved it for me. profile-name can be found in the ~/.aws/config (or credentials) file on the [profile profile-name] line.
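Concretely, that means running the original command with the profile appended, where profile-name is whatever your own config file defines:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1 --profile profile-name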
In order to verify that the AWS CLI is configured on your system, run aws configure and provide it with all the details it requires. That should fix your credentials problem, and reviewing what changes will help you understand what's wrong with your current configuration.
The eb CLI and the aws CLI have very similar capabilities, and I too am a bit confused as to why they both exist. From my experience, the main difference is that the aws CLI is used to interact with your AWS account using simple requests, while the eb CLI creates connections between you and the EB environments and so allows finer control over them.
For instance, I've just developed a CI/CD pipeline for our Beanstalk apps. When I use the eb CLI I can monitor the deployment of our apps and notify the developers when it's finished. The aws CLI does not offer that functionality, and the only way to achieve that is to repeatedly query the service until you receive the desired result.
The AWS CLI is a general tool that works on all AWS resources. It's not tied to a specific software project, the type of machine you're on, the directory you're in, or anything like that. It only needs credentials, whether they've been put there manually if it's your own machine, or generated by AWS if it's an EC2 instance.
The EB CLI is a high level tool to wrangle your software project into place. It's tied to the directory you're in, it assumes that the stuff in your directory is your project, and it has short commands that do a lot of background work to magically put everything in the right place.
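As a rough illustration of the difference, using the placeholder names from the eb status output above, the same information can be fetched either way:
# EB CLI: run from inside the project directory that eb init configured
eb status
# AWS CLI: works from anywhere, but the environment, region, and credentials must be spelled out
aws elasticbeanstalk describe-environments --environment-names env-name --region us-east-1 --profile profile-name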