I'm about to deploy a Docker container on AWS with a credentials file formatted like this:
[default]
aws_access_key_id = KEY
aws_secret_access_key = KEY
region=eu-west-2
vpc-id=vpc-bb1b7fd3
and located in ~/.aws/credentials
When I execute the command docker-machine create --driver amazonec2 app
I get:
Couldn't determine your account Default VPC ID : "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: faf606d9-b12e-4a9e-a6c5-18eb609ffc45"
Error setting machine configuration from flags provided: amazonec2 driver requires either the --amazonec2-subnet-id or --amazonec2-vpc-id option or an AWS Account with a default vpc-id
The default VPC ID is already defined. Can anyone help resolve this or point me in the right direction?
The command I'm using:
docker-machine create --driver amazonec2 --amazonec2-access-key AKIAyyy --amazonec2-secret-key AKIAxxx --amazonec2-region eu-west-2 --amazonec2-vpc-id vpc-bb1b7fd3 flask_app
and when I try to use the credentials file on my file system:
docker-machine create --driver amazonec2 flask_app
where vpc-bb1b7fd3 was generated by AWS by default, so it should be valid, and the system time is correct too. I also tried swapping the keys in case I had somehow mixed them up, but they're fine as well. The output of sudo ntpdate ntp.ubuntu.com was identical to the machine's system time.
The error says: Error with pre-create check: "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: 9d642d91-cd93-4104-b9fb-2a42b1249e3b"
Tried:
A very similar problem on Stack Exchange was solved by restarting the Docker daemon, because Docker's clock stops syncing with the computer's clock when the computer sleeps and wakes up again. I restarted the Docker daemon with no change. Still the same error.
Problem solved by downloading rootkey.csv from AWS and moving it into ~/.aws.
The Docker instance is now deployed on AWS.
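For anyone landing here: rootkey.csv stores the root account's key pair as AWSAccessKeyId and AWSSecretKey lines (whether tooling reads the csv directly is not verified here). A minimal sketch of copying it into the standard credentials-file format, with placeholder values:
# rootkey.csv contains two lines:
#   AWSAccessKeyId=AKIA************
#   AWSSecretKey=****************************
# Copy those values into the standard credentials file:
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIA************
aws_secret_access_key = ****************************
EOF
chmod 600 ~/.aws/credentials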
The issue is not with the keys, so there are two possible reasons:
Your system time is wrong
An invalid VPC ID
You should check your computer's clock; it may be wrong even though it is set to update "automatically from the internet." Running the following will fix the computer's clock:
sudo ntpdate ntp.ubuntu.com
Or run the equivalent command for your OS.
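For instance, on systemd-based Linux distributions, where ntpdate is deprecated, a sketch using timedatectl:
sudo timedatectl set-ntp true
# Verify: the status output should report the system clock as synchronized
timedatectl status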
If the clock is off, AWS rejects the request with "AWS was not able to validate the provided access credentials".
As for the second reason: it seems like you are missing some flags in your command. If fixing the time doesn't help, please update the question with the command you used.
VPC ID
We determine your default VPC ID at the start of a command. In some cases, either because your account does not have a default VPC or because you don't want to use the default one, you can specify a VPC with the --amazonec2-vpc-id flag.
Log in to the AWS console. Go to Services -> VPC -> Your VPCs and locate the VPC ID you want in the VPC column. Then go to Services -> VPC -> Subnets and examine the Availability Zone column to verify that zone a exists and matches your VPC ID. For example, us-east-1a is in the a availability zone. If the a zone is not present, you can create a new subnet in that zone or specify a different zone when you create the machine.
To create a machine with a non-default VPC ID:
docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C********* --amazonec2-vpc-id vpc-****** aws02
This example assumes the VPC ID was found in the a availability zone. Use the --amazonec2-zone flag to specify a zone other than the a zone. For example, --amazonec2-zone c signifies us-east-1c.
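A minimal sketch combining the VPC and zone flags (the machine name aws03 is arbitrary, and the masked values are placeholders as above):
docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C********* --amazonec2-vpc-id vpc-****** --amazonec2-zone c aws03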
Source: Docker Machine with the AWS driver (Amazon Web Services), from the Docker documentation.
Is there an AWS API I can hit to get the availability zone of the current machine where the code is running? I checked whether such an environment variable is set automatically by AWS but couldn't find one. Any help would be appreciated.
You can get the current AZ of the instance from the instance metadata service.
For example:
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo ${AZ}
will output (example):
us-east-1e
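If the instance enforces IMDSv2 (token-based metadata access), the same lookup needs a session token first; a short sketch:
# Request a short-lived session token, then present it on the metadata call
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo ${AZ}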
I recently spun up a t2.micro instance and I want to install neo4j on it. I started with the instructions at https://neo4j.com/developer/neo4j-cloud-aws-ec2-ami/. But when I got to the step for creating a security group, I received an error that a region needed to be supplied. Here is the command I used:
aws ec2 create-security-group \
--group-name $GROUP \
--description "Neo4j security group"
The error message was
You must specify a region. You can also configure your region by running "aws configure".
When I run aws configure I get prompted for a lot of things that don't seem related to the region. Not only am I prompted for values that I don't know where or how to get, but when I am prompted for the region I am not sure what format to enter it in. So my question is: how do I configure a security group so I can move on to installing neo4j on this instance?
There are still several steps to follow to install neo4j, but I seem to be tripped up on this step.
The commands expect a default region in ~/.aws/config:
[default]
region=us-west-2
output=json
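Alternatively, the region can be passed per command with the AWS CLI's global --region option, which avoids the config file entirely; a sketch (eu-west-1 is only an example value):
aws ec2 create-security-group \
--group-name $GROUP \
--description "Neo4j security group" \
--region eu-west-1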
The link you shared includes a step to "Configure the AWS CLI with Your Credentials". This step lets you set up AWS profile(s), and as part of those profiles you can set a region.
Follow this link to understand how to set up your AWS profile with credentials and region details:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
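Regarding the prompts and the region format: aws configure asks four questions, and the region is entered as a bare region code such as us-east-1. A sketch of the interaction (key values are placeholders):
$ aws configure
AWS Access Key ID [None]: AKIA************
AWS Secret Access Key [None]: ****************************
Default region name [None]: us-east-1
Default output format [None]: json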
Hope it helps
I'm playing around with AWS, and my credentials worked a few months back. I'm using the credentials file located in ~/.aws/credentials with the keys provided by AWS. The access key was updated, so I've changed it in the file, but the secret key remained the same.
I've got the credentials file in this format:
[default]
aws_access_key_id=xyz
aws_secret_access_key=xyz
region=eu-west-2
vpc-id=xyz
When I run docker-machine create --driver amazonec2 testdriven-prod
I get this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
The file is in the right directory, though. Why can't docker-machine see it? I really don't understand this error.
What can I try to resolve this?
This isn't a real answer, rather a find.
I used the verbose CLI command to create the instance and it worked, even though this:
docker-machine create --driver amazonec2 --amazonec2-access-key XYZ --amazonec2-secret-key XYZ --amazonec2-open-port 8000 --amazonec2-region eu-west-2 testdriven-prod
should be equivalent to:
aws_access_key_id=XYZ
aws_secret_access_key=XYZ
region=eu-west-2
in the ~/.aws/credentials file, the behaviour was different.
So if anyone is still interested in sharing what the real answer to this might be, please feel free to post it.
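Since the error message itself lists environment variables as an accepted credentials source, one more thing worth trying is exporting the standard AWS variables before calling docker-machine; a sketch with the same placeholder values (untested here):
# The amazonec2 driver also accepts credentials via the standard AWS env vars
export AWS_ACCESS_KEY_ID=XYZ
export AWS_SECRET_ACCESS_KEY=XYZ
export AWS_DEFAULT_REGION=eu-west-2
docker-machine create --driver amazonec2 --amazonec2-open-port 8000 testdriven-prod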
I'm trying to run a Spark cluster on AWS using https://github.com/amplab/spark-ec2.
I've generated a key and login credentials, and I'm using this command:
./spark-ec2 --key-pair=octavianKey4 --identity-file=credentials3.csv --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
However, I keep getting this:
Warning: SSH connection error. (This could be temporary.)
Host: ec2-myHostNumber.eu-west-1.compute.amazonaws.com
SSH return code: 255
SSH output: Warning: Permanently added 'ec2-myHostNumber.eu-west-1.compute.amazonaws.com,myHostNumber' (ECDSA) to the list of known hosts.
Permission denied (publickey).
If I quit the console and then try to start the cluster again, I get this:
Setting up security groups...
Searching for existing cluster my-instance-name in region eu-west-1...
Found 1 master, 1 slave.
ERROR: There are already instances running in group my-instance-name-master or my-instance-name-slaves
The command is incorrect. The key pair name should be the one you created in AWS, and the identity file is the associated .pem file. You can't SSH into a machine using AWS credentials (your csv file contains credentials, not an SSH key).
./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
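Also, as the documentation quoted below hints, SSH refuses a private key file with loose permissions, which produces the same "Permission denied (publickey)" error; locking the .pem down first is cheap insurance:
# The private key must be readable only by its owner, or ssh rejects it
chmod 400 octavianKey4.pem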
Can you add --resume to your spark-ec2 command and try? Your slave may not have the key. --resume will make sure it is transferred to the slave.
Running Spark on EC2
If one of your launches fails due to e.g. not having the right permissions on your private key file, you can run launch with the --resume option to restart the setup process on an existing cluster.
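A sketch of the resumed launch, reusing the corrected command from the first answer (the flag placement is an assumption; the Spark docs only say to run launch with --resume):
./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem --region=eu-west-1 --zone=eu-west-1c launch my-instance-name --resume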
When following the tutorial instructions for connecting to my JobFlow in EMR, I type the following:
./elastic-mapreduce --jobflow j-3FLVMX9CYE5L6 --ssh
and get this error:
Permission denied (publickey)
I'm already able to run other elastic-mapreduce commands just fine to create flows etc., so I'm assuming there are security settings required on the actual master instance for the flow, but nothing in the tutorial explains how to configure them (after all, I need to SSH into it to do the configuration in the first place!).
I found that I need to log in as the user "hadoop" using the EC2 key pair, and not any of the usual suspects (ec2-user, root, etc.), like:
ssh -i privatekey.pem hadoop@masternode
Hope this is useful to someone.
OK, now I feel sheepish: I was using the Amazon CloudFront key pair from my initial account setup rather than the key pair associated with my account for accessing EC2 instances, which is accessible from EC2 > Network & Security > Key Pairs in the AWS Management Console.
The command "ssh -i privatekey.pem hadoop#masternode" worked great. The user "hadoop" must be used for "ec2 elastic mapreduce".