Can't log in to Docker with AWS - amazon-web-services

This is an extension of my last question, since I've decided to deploy a Docker container onto a number of EC2 instances. I've set up a repository and a user with full rights, and I added the correct keys to my AWS CLI configuration. When I try to run the docker login command produced by "aws ecr get-login", it fails with a "403 Forbidden" error. I have absolutely no clue what's going on, and I've spent the past two days trying to fix this error... Any ideas?
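For reference, the login flow I'm attempting is roughly the following (the account ID and region are placeholders, and newer CLI versions replace get-login with get-login-password):

# CLI v1: prints a ready-made "docker login" command to run
aws ecr get-login --no-include-email --region eu-west-1

# CLI v2: pipe a temporary token straight into docker login
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com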

I would suggest checking the security group of the EC2 instance.
To allow access via SSH, you have to add the following settings to the security group of the EC2 instance:
Security Groups
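For example, an inbound SSH rule can be added from the CLI with something like the following (the group ID and CIDR are placeholders for your own values):

# Allow inbound SSH (TCP 22) from a specific address range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24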

Related

Unable to deploy code on ec2 instance using codedeploy

I have a single EC2 instance running Ubuntu Server, and I am trying to implement a CI/CD flow using CodeDeploy with Bitbucket as the source. I have also installed the codedeploy-agent on the EC2 instance, and it is installed and running successfully, but whenever I deploy code to the EC2 instance the deployment fails with the error shown below:
The overall deployment failed because too many individual instances failed deployment, too few
healthy instances are available for deployment, or some instances in your deployment group are
experiencing problems.
The CodeDeploy agent log file, which I am viewing with less /var/log/aws/codedeploy-agent/codedeploy-agent.log, shows the following error:
ERROR [codedeploy-agent(31598)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Missing credentials - please check if this instance was started with an IAM instance profile
I am unable to understand how I can overcome this error; could someone let me know?
The CodeDeploy agent requires IAM permissions, which are provided by the IAM role/instance profile attached to your instance. The exact permissions needed are given in the AWS docs:
Step 4: Create an IAM instance profile for your Amazon EC2 instances
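As a rough sketch of that step (the role, profile, and instance IDs below are placeholders), you create an instance profile for the role from the docs, attach it to the instance, and restart the agent so it picks up the credentials:

# Wrap the CodeDeploy role in an instance profile
aws iam create-instance-profile --instance-profile-name CodeDeploy-EC2-Instance-Profile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeploy-EC2-Instance-Profile --role-name CodeDeploy-EC2-Instance-Role

# Attach the profile to the running instance
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CodeDeploy-EC2-Instance-Profile

# Restart the agent so it re-reads its credentials
sudo service codedeploy-agent restart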

Is it possible to pull images from ECR without using docker login

I have an ECR repository and an EC2 instance running Docker. What I want to do is pull images without doing docker login first.
Is it possible at all? If yes, what kind of policy should I attach to the EC2 instance and/or the ECR repo? I did a lot of experiments, but did not succeed.
And please - no suggestions on how to use aws ecr get-login. My aim is to get rid of it by using IAM policies/roles.
To use an EC2 role without having to run docker login, you can use the Amazon ECR credential helper: https://github.com/awslabs/amazon-ecr-credential-helper
Place the docker-credential-ecr-login binary on your PATH and set the contents of your ~/.docker/config.json file to be:
{
"credsStore": "ecr-login"
}
Now commands such as docker pull or docker push will work transparently.
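As for the policy part of the question: pulling needs at least ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer, all of which the AWS-managed read-only policy bundles. A minimal sketch (the role name is a placeholder):

# Attach the managed read-only ECR policy to the instance's role
aws iam attach-role-policy --role-name MyEc2InstanceRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly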
"My aim is to get rid of it by using IAM policy/roles."
I don't see how this is possible, since some form of authentication is required.

SSH connection error - Permission denied (publickey)

I'm trying to run a Spark cluster on AWS using https://github.com/amplab/spark-ec2.
I've generated a key and login credentials, and I'm using this command:
./spark-ec2 --key-pair=octavianKey4 --identity-file=credentials3.csv --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
However, I keep getting this:
Warning: SSH connection error. (This could be temporary.)
Host: mec2-myHostNumber.eu-west-1.compute.amazonaws.com
SSH return code: 255
SSH output: Warning: Permanently added 'ec2-myHostNumber.eu-west-1.compute.amazonaws.com,myHostNumber' (ECDSA) to the list of known hosts.
Permission denied (publickey).
If I quit the console and then try to start the cluster again, I get this:
Setting up security groups...
Searching for existing cluster my-instance-name in region eu-west-1...
Found 1 master, 1 slave.
ERROR: There are already instances running in group my-instance-name-master or my-instance-name-slaves
The command is incorrect. The key pair name should be the one you registered in AWS, and the identity file should be the associated .pem file. You can't SSH into a machine with AWS API credentials (your .csv file contains access keys, not an SSH key).
./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
Can you add --resume to your spark-ec2 command and try? Your slave may not have the key. --resume will make sure it is transferred to the slave.
Running Spark on EC2
If one of your launches fails due to e.g. not having the right permissions on your private key file, you can run launch with the --resume option to restart the setup process on an existing cluster.
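Concretely, the resumed launch would look roughly like the corrected command above with --resume added:

./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem --region=eu-west-1 --zone=eu-west-1c --resume launch my-instance-name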

IAM Role + Boto3 + Docker container

As far as I know, boto3 will try to load credentials from the instance metadata service.
If I run this code directly on an EC2 instance, I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. These special variables are provided only to the process with PID 1. The script specified in the Dockerfile ENTRYPOINT gets PID 1.
There are several networking modes, and the details might differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you ran printenv as PID 1 you would see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
Debugging also gets tricky: after SSH'ing or exec'ing into the container you are running under a PID other than 1, which means that services that need these credentials might fail to obtain them if you run them manually.
ECS task metadata endpoint documentation
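As a quick sanity check from inside the container (assuming the task has a task role so the variable is populated), the relative URI is served by the ECS agent at the fixed link-local address 169.254.170.2:

# Fetch the temporary credentials boto3 would use; the response is JSON with
# AccessKeyId, SecretAccessKey, Token, and Expiration
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"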
Find the .aws folder at ~/.aws on your machine and move it to the Docker container's /root folder.
.aws contains the files that hold your AWS access key and secret key.
You can easily copy it into a currently running container from your local machine with:
docker cp ~/.aws <container_id>:/root
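Alternatively, a sketch that avoids copying credentials into a running container is to bind-mount the directory read-only at run time (the image name is a placeholder):

# Mount the host's ~/.aws into the container instead of copying it
docker run --rm -v ~/.aws:/root/.aws:ro my-image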

Error SSHing to Elastic MapReduce JobFlow on AWS

When following the tutorial instructions for connecting to my JobFlow in EMR, I type the following:
./elastic-mapreduce --jobflow j-3FLVMX9CYE5L6 --ssh
and get this error:
Permission denied (publickey)
I'm already able to run other elastic-mapreduce commands just fine to create flows etc., so I'm assuming there are security settings required on the actual master instance for the flow, but nothing in the tutorial explains how to configure this (after all, I need to SSH into it to do the configuration in the first place!)
I found that I need to log in as the user "hadoop" using the EC2 key pair, and not any of the usual suspects (ec2-user, root, etc.), like so:
ssh -i privatekey.pem hadoop@masternode
Hope this is useful to someone.
OK, now I feel sheepish: I was using the Amazon CloudFront key pair from my initial account setup rather than the key pair associated with my account for accessing EC2 instances, which is found under EC2 > Network & Security > Key Pairs in the AWS Management Console.
The command "ssh -i privatekey.pem hadoop@masternode" worked great. The user "hadoop" must be used for EC2 Elastic MapReduce.