I would like to share a GitHub project SSH key pair with all new instances that I create, so that it's possible to git clone and launch the program from the user data file without having to SSH into the instance.
Quite easy to do on GCP, but I'm not sure how to do any of that with AWS EC2 instances.
Edit: In GCP I would simply use Secret Manager, which is shared between instances.
Since you mention that you'd use Secret Manager on Google Cloud, it seems reasonable to suggest the AWS Secrets Manager service.
Store your private key as a secret, and grant access to it with an IAM role attached to the EC2 instance. Install the AWS CLI package when building the AMI, and you can use it to fetch the secret on first boot from a User Data script.
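A minimal sketch of that flow as a User Data script. The secret name (github/deploy-key) and region are placeholder assumptions, and the instance role must allow secretsmanager:GetSecretValue:

```shell
#!/bin/bash
# Sketch: fetch a GitHub deploy key from AWS Secrets Manager on first boot.
# Secret name and region below are hypothetical -- adjust to your setup.
fetch_deploy_key() {
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
  aws secretsmanager get-secret-value \
    --secret-id github/deploy-key \
    --region eu-west-1 \
    --query SecretString --output text > /root/.ssh/id_rsa
  chmod 600 /root/.ssh/id_rsa
}
# fetch_deploy_key   # requires the instance role to allow secretsmanager:GetSecretValue
```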
Because I find AWS Secrets Manager hard to use and expensive compared to GCP's, here's the solution I ended up using.
This is my user data file, passed to the instance on creation:
sudo mkdir ~/.ssh
sudo touch ~/.ssh/id_rsa
sudo echo "-----BEGIN OPENSSH PRIVATE KEY-----
My GitHub private key" >> ~/.ssh/id_rsa
sudo chmod 700 ~/.ssh/
sudo chmod 600 ~/.ssh/id_rsa
git clone git@github.com:your-user/your-repo.git
# other commands go here
Note that this adds the key to the root user.
Not the cleanest solution, but it works well.
Edit: sudo shouldn't be required, because user data already runs as root.
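One gotcha with this approach: a non-interactive git clone over SSH will fail host-key verification unless github.com is already in known_hosts. A small helper to call before the clone (a sketch; the paths assume the script runs as root, as user data does):

```shell
#!/bin/bash
# Record a host's public SSH keys so a later non-interactive git clone
# is not blocked by the host-key confirmation prompt.
add_known_host() {
  mkdir -p /root/.ssh
  ssh-keyscan -H "$1" >> /root/.ssh/known_hosts 2>/dev/null
}
# add_known_host github.com   # then: git clone git@github.com:your-user/your-repo.git
```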
Related
I have an Ubuntu instance in AWS EC2.
In the .ssh folder, in the authorized_keys file, I see that I have the key_name that was generated in AWS.
I took this public key and added it on gitlab & github accounts under SSH preferences.
When I try to clone my repo with ssh I still get permission denied.
git clone git@gitlab.com:[username]/[project].git
What else am I missing?
GitHub has changed its account authentication: when cloning private projects, you now need to authenticate with the SSH key previously registered in your account, instead of your account password.
The link below explains this in more detail and also shows how to register your SSH key for authentication:
https://docs.github.com/en/enterprise-server@3.0/github/authenticating-to-github/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account
I will try to explain better what my problem was and how I fixed it.
I created a Linux instance in AWS EC2 and also generated my private key (key_name) with which I could SSH into the instance.
Inside the instance, in the .ssh folder, there is a file named authorized_keys which holds the public key.
I thought I could take this key and add it to my gitlab/github accounts, but this didn't work. (perhaps I still lack some basic understanding of ssh...)
What worked was generating a new key pair inside the EC2 Linux instance and placing that public key in GitLab/GitHub.
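For reference, generating a fresh key pair inside the instance looks like this (a sketch; it writes into a temporary directory so it can run anywhere, whereas in practice you would use ~/.ssh):

```shell
# Generate a new key pair; the public half is what goes into GitLab/GitHub.
key_dir=$(mktemp -d)                     # in practice: ~/.ssh
ssh-keygen -t ed25519 -N "" -q -f "$key_dir/id_ed25519"
cat "$key_dir/id_ed25519.pub"            # paste this into GitLab/GitHub SSH settings
```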
Thanks.
When I try to clone my private repository from Bitbucket to my EC2 instance using the SSM agent, I get:
Permission denied (publickey).
fatal: Could not read from remote repository.
After investigation I found that the SSM command for some reason can't see any of my public keys in ~/.ssh. How do I git clone from Bitbucket using SSM?
The problem was that SSM (like any startup command) runs as root, and I found that root doesn't have access to my user's SSH public and private keys.
So my solution (I think it is a workaround, but it works for me):
Change the current user to root: sudo su
Go to the .ssh directory: cd ~/.ssh
Copy the key files there, or regenerate the SSH keys as root.
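The steps above can be sketched as a small helper. It takes the target .ssh directory as an argument (an assumption for illustration), which would be /root/.ssh when run as root under SSM:

```shell
# Set up a key pair in a given .ssh directory so commands running as that
# user (e.g. root under SSM) can authenticate to Bitbucket.
setup_ssh_keys() {
  dir="$1"
  mkdir -p "$dir"
  chmod 700 "$dir"
  ssh-keygen -t ed25519 -N "" -q -f "$dir/id_ed25519"
  chmod 600 "$dir/id_ed25519"
  # Register $dir/id_ed25519.pub with Bitbucket afterwards.
}
# As root under SSM: setup_ssh_keys /root/.ssh
```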
I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM roles are the best way to delegate S3 permissions to Docker containers.
Create a role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select your S3 policies, and the role is created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace IAM Role.
This worked successfully in Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your own, or a custom IAM user's, CLI credentials to mimic ECS behavior for local testing).
# local system: Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install
# run with your CLI credentials mounted ("~" does not expand inside quotes, so use $HOME)
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws <image>
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
General Role Propagation
The C# SDK, for example, documents a list of locations it will check, in order, to obtain credentials. Not everything behaves like this, but many SDKs do, so you have to research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
1. Plain-text credentials fed into either the target system or environment variables.
2. CLI credentials, with a profile set in the AWS_PROFILE environment variable.
3. The Task Execution Role used to deploy the Docker task.
4. The Task Role used by the running task.
5. When the running task has no permissions for the current action, it will attempt to fall back to the EC2 instance role.
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup (such as configuring ECS), it's often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoint, which lives at a well-known address (169.254.169.254) in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass in the details through args, which I see as being fine and the way I would personally do it. You can over-engineer certain processes, and I think this is one of them.
Example docker with args
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234 <image>
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS has created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role, credentials are retrieved from the instance metadata service, a private address only the instance itself can reach, meaning you never have to store your credentials in the image itself.
Personally I think it's overkill for what you are trying to do: if someone hacks your image they can print the credentials out and still get access to those details. Passing them in as args is safe as long as you protect yourself, as you should anyway.
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
I am facing an interesting problem. I have launched an EC2 instance running Ubuntu 14.04. I can SSH into it by providing the key file, like below:
ssh -i "xxxxx.pem" ubuntu@xxxxxxxxxxxx.ap-south-1.compute.amazonaws.com
But I thought of making another account on the instance rather than always using ubuntu (the sudo user), which is not safe. So I created another account under my name on the server. For more security, I created a private (id_rsa) and public (id_rsa.pub) key pair, put the public key in the server's .ssh/authorized_keys, and expected to be able to SSH in from my new account on my local machine. That also worked; now I can SSH into the server like below:
ssh naroju@XXXXXXXXXXXXXXX.ap-south-1.compute.amazonaws.com
Now the problem comes: although I can SSH in from my new account, I cannot SSH into the server from my ubuntu account. It gives the error below.
Permission denied (publickey).
Why is it giving me this error?
I also wonder why SSH from my new account worked at all: doesn't it require the private key file (the .pem file from AWS)?
To create a new user (eg naroju) on the instance, you should create a .ssh/authorized_keys file in the new user's home directory:
$ sudo adduser naroju
$ sudo su - naroju
$ mkdir .ssh
$ chmod 700 .ssh
$ touch .ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
Then, edit the authorized_keys file and add the new user's public key.
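Appending the key can be done in one line. The sketch below uses a throwaway directory and a placeholder key string so it can run anywhere; in practice the target is the new user's ~/.ssh/authorized_keys and the key is the user's real public key:

```shell
# Demo directory standing in for the new user's home; the key below is a
# placeholder string, not a real public key.
home=$(mktemp -d)
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
echo "ssh-ed25519 AAAA...placeholder naroju@laptop" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
```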
You can then login to the new user:
$ ssh -i naroju.pem naroju@IPADDRESS
Since you have modified the authorized_keys file in the ubuntu user's home directory, you will need to log in as ubuntu using the private half of the keypair you selected when launching the instance. You can then run the above commands.
See Amazon EC2 documentation: Managing User Accounts on Your Linux Instance
I was trying to organize how a developer could connect via SSH to an AWS instance that I had launched as an administrator. I notice in the documentation that it says:
you'll need the fully-qualified path of the .pem file for the key pair that you specified when you launched the instance.
Does this mean one can only SSH into an instance that one launched? I'd like to just leave the instance running and have others able to SSH in to install software and configure it.
Here's how to add new users/developers to an AMAZON EC2 linux instance and give them unique SSH Key access:
Say you are creating user "user". Create a key on your own machine by entering the following:
ssh-keygen -b 1024 -f user -t dsa
Don't use a passphrase, just hit Enter.
You should now have two files: user (the private key) and user.pub (the public key).
chmod 600 user
Now transfer the public key file (user.pub) from your computer to the server. For example let us use the /tmp/ directory.
Now SSH into your server using an account with root access. You will need to create the user, and also create the necessary files and ownership, so that the key you just created can be used:
# sudo su (if needed)
# useradd -c "firstname lastname" user
# cd /home/user
# mkdir .ssh
# chmod 700 .ssh
# chown user:user .ssh
# cat /tmp/user.pub >> .ssh/authorized_keys
# chmod 600 .ssh/authorized_keys
# chown user:user .ssh/authorized_keys
Once you've done this, exit out back to your own machine, then try to SSH using the new credential and user account you've created:
ssh -i user user@ec2-your-instance-name.compute-1.amazonaws.com
When you create an instance, you can specify a key at launch time. What ends up happening is that AWS takes the public key associated with the key pair you created, and puts it into authorized_keys in /home/ec2-user/.ssh/.
So, if this is a one-time thing, you could provide the private key (the .pem file you downloaded when you created the key) to the user that needs access.
If this is an on-going issue - i.e. you will be creating lots of instances and having lots of different people who need to access them - then you need a better solution, but you'll have to figure out what your requirements are.
Some potential solutions: collect public keys from your users, add them to an instance, and then create an AMI from that instance; use that AMI to launch future instances. You could also put users' public keys into S3 and have a script that pulls them down when the instance is created and either adds them to authorized_keys or creates individual users. You could pull users' keys from IAM if all your users have IAM accounts. Or you could use a directory service and configure your instance to use it for authentication.
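The S3 variant could be sketched like this in user data. The bucket name and the one-.pub-file-per-user layout are assumptions for illustration:

```shell
#!/bin/bash
# Sketch: pull every user's public key from a bucket and install them all
# for ec2-user. Bucket name and layout below are hypothetical.
install_keys_from_s3() {
  tmp=$(mktemp -d)
  aws s3 cp s3://my-team-keys/ "$tmp/" --recursive
  cat "$tmp"/*.pub >> /home/ec2-user/.ssh/authorized_keys
  chmod 600 /home/ec2-user/.ssh/authorized_keys
}
# install_keys_from_s3   # requires an instance role with read access to the bucket
```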
One way to set up user management is:
Ask all the users to create their own public-private key pair.
Everyone shares/copies/checks in their public key to some location exposed by an rsync server.
On the server, fetch all these keys from the rsync server and run a user-creation command.
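The user-creation step could look like the sketch below: it walks a directory of <name>.pub files and emits the commands that create each user and install their key. It prints the commands rather than running them, since user creation needs root; pipe the output to a root shell to apply:

```shell
# For each <name>.pub in the given directory, print the commands that create
# the user and install the public key. Run the output as root to apply.
emit_user_setup() {
  for pub in "$1"/*.pub; do
    name=$(basename "$pub" .pub)
    echo "useradd -m $name"
    echo "install -d -m 700 -o $name -g $name /home/$name/.ssh"
    echo "install -m 600 -o $name -g $name $pub /home/$name/.ssh/authorized_keys"
  done
}
# emit_user_setup /path/to/pubkeys | sudo sh
```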
Another way is to have a base image in AWS with the users already created and their public keys in authorized_keys, and to use this image to create new instances.