AWS EC2: SSH access for new user to existing VMs - amazon-web-services

A new developer joined our team and I need to grant him access to all VMs we have in AWS EC2. After reading a bunch of AWS docs it seems to me that we have two options:
Share the private key used when VMs were spun up with the developer
Have developer generate a new key pair and add his public key to authorized_keys on each VM.
Neither option is ideal, because #1 violates security best practices and #2 requires me to make changes on a bunch of VMs.
What's the recommended way to do this?

The question is rather broad, so my answer will be broad.
Yeah, sharing private keys is a bad thing. So I'll skip that and focus on the other portion.
It sounds like you want to centrally manage accounts, rather than manually adding/removing/modifying them on each individual server.
You can set up something like NIS to manage user accounts. This would require changes to every single VM.
If you use something like puppet, chef, or salt you can create recipes to control user access (e.g. pushing out public keys or even creating accounts and configuring sudo).
You can use something like pssh (parallel ssh) to execute commands on multiple hosts at the same time. It could simply append a public key to an existing authorized_keys file, or even add a user, their key, and the necessary sudo access. (Note: if you do this, be very careful. A poorly written command could cut off access for everyone and cause unnecessary downtime.)
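For example, something like this rough sketch, assuming your existing admin key still works on every host; hosts.txt, the key path, and the public key material are placeholders:

    # Append the new developer's public key on every host listed in hosts.txt.
    # -l sets the remote user, -i prints output inline, -x passes extra args to ssh.
    pssh -h hosts.txt -l ec2-user -i -x "-i ~/.ssh/admin-key.pem" \
      "echo 'ssh-ed25519 AAAAC3... newdev@example.com' >> ~/.ssh/authorized_keys"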
An aside: having multiple users share a single account is a bad idea, and generally a security and QA nightmare. Instead of letting several people use the same account, each user should have their own account with the minimal privileges they need.
Do as you will.

Have you checked out the Run Command feature? You can use it to execute a simple script to add or remove users.
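As a rough sketch, assuming the SSM agent is running on the instances and they have an instance profile that allows Systems Manager; the tag, user name, and key material below are placeholders:

    # Create a user and install their public key on every instance tagged Team=backend.
    aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --targets "Key=tag:Team,Values=backend" \
      --parameters '{"commands":[
        "useradd -m newdev",
        "install -d -m 700 -o newdev -g newdev /home/newdev/.ssh",
        "echo ssh-ed25519 AAAAC3... newdev@example.com >> /home/newdev/.ssh/authorized_keys",
        "chown newdev:newdev /home/newdev/.ssh/authorized_keys && chmod 600 /home/newdev/.ssh/authorized_keys"
      ]}'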

Related

Permissions to grant for a "sandbox" project?

We're adding a GCP project to be used for greenfield development, e.g. sort of a developer sandbox. My inclination is to give application/service developers full permissions in that project, to reduce friction and let them get stuff done as quickly and easily as possible.
We then have a separate beta project which we use where we prepare work for production, where application/service developers would have limited-to-no access, but the devops team could productionize things. And then, of course, we have the production project, where everything is locked down tight.
Is a sandbox like this a good idea? What permission(s) would I grant? Owner? GCP recommends not using those legacy roles...
List all of what each team is allowed to do on each env.
Translate this to a list of IAM permissions per team per env.
If there are predefined roles that exactly match these permissions, use them.
If not, create your own custom roles for each team in each env.
For example, in the sandbox env:
if the developers team is only allowed to create GKE clusters and deploy workloads to them, list all the permissions required for those operations and find a predefined role whose permissions allow only that. See here.
Or, if that is too broad and doesn't satisfy the least-privilege principle for you, create your own custom role.
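For instance, a hedged gcloud sketch of such a custom role; the project ID, role ID, group, and permission list are placeholders you would derive from the list you built above:

    # Create a custom role scoped to the sandbox project with only the
    # permissions the developers team actually needs.
    gcloud iam roles create sandboxDeveloper \
      --project=my-sandbox-project \
      --title="Sandbox Developer" \
      --permissions=container.clusters.create,container.clusters.get,container.clusters.list

    # Bind the custom role to the team's Google group.
    gcloud projects add-iam-policy-binding my-sandbox-project \
      --member="group:developers@example.com" \
      --role="projects/my-sandbox-project/roles/sandboxDeveloper"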
Personally, I don't recommend restricting IAM permissions here. In a sandbox project you want to try things, possibly things completely outside the box and far from your usual way of working. Using IAM to limit the set of allowed products restricts creativity and protects you against (almost) nothing.
If you do want to impose restrictions, ask yourself what they are for. Limiting access to services in the beta environment? Probably not. Preventing the overuse of resources in a non-production (and non-profitable) environment? I think yes!
That's why I recommend using Quotas to restrict the number of resources available to the project (e.g. only 10 CPUs in 1 region, not 3,600 across 20 regions as by default). That way the app team can try and experiment safely, without restrictions, but without killing your budget.

How to add some new code to an existing EC2 instance

Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have basically just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can SSH in and use SCP/SFTP to copy files to and from them. You can also use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance: if you look in the web console you can find its IP(s), its security groups (firewall rules), and the key pair name. Hopefully they gave you the keys; you need them to SSH in.
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
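Assuming you do have the key pair, a rough sketch; the key path, user, IP, and directories are all placeholders, and the default user depends on the AMI (ec2-user for Amazon Linux, ubuntu for Ubuntu):

    # Log in to the instance.
    ssh -i ~/keys/prod-site.pem ec2-user@203.0.113.10

    # Copy the existing code down to your machine for inspection...
    scp -i ~/keys/prod-site.pem -r ec2-user@203.0.113.10:/srv/django-app ./django-app

    # ...or push your new service's code up to the instance.
    scp -i ~/keys/prod-site.pem -r ./my-service ec2-user@203.0.113.10:/tmp/my-service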
If you don't have the keys you'll have to create an AMI image of the instance so you can create a new one with a key pair you do have.
Amazon has a set of tools for you in Amazon CodeSuite.
The tool used for "deploying" the code is Amazon CodeDeploy. By using this service you install an agent onto your host, then when triggered it will pull down an artifact of a code base and install it matching hosts. You can even specify additional commands through the hook system.
But you also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated using the CodePipeline tool.
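Once the agent, application, and deployment group exist, you can also trigger a deployment by hand from the CLI; a rough sketch with placeholder names for the application, deployment group, bucket, and artifact key:

    aws deploy create-deployment \
      --application-name my-django-app \
      --deployment-group-name production \
      --s3-location bucket=my-artifacts,key=my-django-app.zip,bundleType=zip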

How to apply multiple SSH keys to an AWS Lightsail Instance

My team has an issue that, when we spin up a new lightsail instance, we are only allowed to apply a single SSH key pair to that instance.
Is there a way to add the key pairs from everyone on my team to some kind of group? And then apply that group to the lightsail instance?
We need everyone on the team to be able to have access to the instance and I cannot find a way to accomplish this. Any insight would be greatly appreciated!
First thing: a private key is not designed to be shared by a whole team, and doing so is really bad practice. You should not share the instance's key with everyone.
Is there a way to add the key pairs from everyone on my team to some kind of group? And then apply that group to the lightsail instance?
You have two options.
Ask each developer and team member for their public key and add those keys to the ~/.ssh/authorized_keys file on the instance. Each person will then be able to SSH in with their own key.
This approach also makes it easy to remove a user once their job is done, and rotating user keys becomes a bit easier. (A sketch of both options follows below.)
Alternatively, use OpsWorks for user and SSH management on EC2 machines, or you can explore amazon-ec2-instance-connect for SSH.
With this approach you won't need to SSH in and add new team members manually; you can do it from the AWS console. I would prefer this.
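A rough sketch of both options; the key material, instance ID, availability zone, and file paths are placeholders, and the Instance Connect command targets a plain EC2 instance (whether it applies to your Lightsail setup is worth checking):

    # Option 1: while logged in with the original key, append each teammate's
    # public key to authorized_keys.
    echo 'ssh-ed25519 AAAAC3... alice@example.com' >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # Option 2: push a short-lived public key with EC2 Instance Connect; the
    # key is only valid for about 60 seconds, so the user connects right after.
    aws ec2-instance-connect send-ssh-public-key \
      --instance-id i-0123456789abcdef0 \
      --availability-zone us-east-1a \
      --instance-os-user ec2-user \
      --ssh-public-key file://keys/alice.pub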
I haven't used Lightsail before, but since it uses EC2 under the hood I'm assuming it's pretty much the same.
You can SSH into the machine with the private key provided by Lightsail, and then add the public keys of the members of your team, one per line, to this file:
~/.ssh/authorized_keys
Then, the people from your team would use something like
ssh ec2-user@public-ip -i /path/to/private/key
If you used an Amazon Linux instance, the user is ec2-user; if you used a different image, the user will be different.
If you want to add keys to multiple Lightsail instances, I suggest using a configuration management tool like Ansible.
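For example, an ad-hoc Ansible sketch; the inventory file, group name, remote user, and key path are placeholders, and it assumes the ansible.posix collection is installed:

    # Push one teammate's public key to every host in the [lightsail] inventory group.
    ansible -i inventory.ini lightsail -b -m ansible.posix.authorized_key \
      -a "user=ec2-user state=present key=\"$(cat keys/alice.pub)\""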

multiuser public jupyter notebook on AWS sagemaker

I know there is a good tutorial on how to create jupyter notebooks on AWS sagemaker "the easy way".
Do you know if it is possible to allow 10 students who do not have AWS accounts to create Jupyter notebooks, and also allow them to edit those notebooks?
Enabling multiple users to leverage the same notebook (in this case, without authentication) will involve managing your Security Groups to enable open access. You can filter, allowing access for a known IP address range, if your students are accessing it from a classroom or campus, for example.
Tips for this are available in this answer and this page from the documentation, diving into network configurations for SageMaker hosted notebook instances.
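As a rough sketch of that kind of filtering; the security group ID and CIDR range are placeholders, and this assumes the notebook is reached over HTTPS:

    # Allow inbound HTTPS only from a known classroom/campus IP range.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 443 \
      --cidr 203.0.113.0/24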
As for enabling students to spin up their own notebooks, I'm not sure if it's possible to enable completely unauthenticated AWS-level resource provisioning -- however once you've spun up a single managed notebook instance yourself, students can create their own notebooks directly from the browser in Jupyter, once they've navigated to the publicly available IP. You may need to attach a new SageMaker IAM role that enables notebook creation (amongst other things, depending on the workload requirements). Depending on the computational needs (number, duration, and types of concurrent workloads), there will be different optimal setups of number of managed instances and instance type to prevent computational bottlenecking.

Using AWS S3 as an alternative to SFTP

Folks,
I've setup an SFTP server on an EC2 instance to receive files from remote customers that need to send 3 files each, several times throughout the day (each customer connects multiple times a day, each time transferring the 3 files which keep their names but change their contents). This works fine if the number of customers connecting simultaneously is kept under control, however I cannot control exactly when each customer will connect (they have automated the connection process at their end). I am anticipating that I may reach a bottleneck in case too many people try to upload files at the same time, and have been looking for alternatives to the whole process ("distributed file transfer" of some sort). That's when I stumbled upon AWS S3, which is distributed by definition, and was wondering if I could do something like the following:
Create a bucket called "incoming-files"
Create several folders inside this bucket, one for each customer
Setup a file transfer mechanism (I believe I'd have to use S3's SDK somehow)
Provide a client application for each customer, so that they can run it at their side to upload the files to their specific folders inside the bucket
This last point is easy with SFTP, since you can set a "root" folder for each user so that when the user connects to the server they automatically land in their appropriate folder. Not sure if something of this sort can be worked out on S3. Also, the file transfer mechanism would have to not only provide credentials to access the bucket, but also "sub-credentials" to access the folder.
I have been digging into S3 but couldn't quite figure out if this whole idea is (a) feasible and (b) practical. The other limitation with my original SFTP solution is that by definition an SFTP server is a single point of failure, which I'd be glad to avoid. I'd be thrilled if someone could shed some light on this (btw, other solutions are also welcomed).
Note that I am trying to eliminate the SFTP server altogether, and not mount an S3 bucket as the "root folder" for the SFTP server.
Thank you
You can create an S3 policy that grants access only to a certain prefix (a "folder" in your plan). The only thing your customers need is permission to make PUT requests. For each customer you will also need to create a set of access keys.
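A rough sketch of such a per-customer policy, attached as an inline policy to that customer's IAM user; the user name, bucket, and prefix are placeholders:

    # Allow this customer to PUT objects only under their own prefix.
    aws iam put-user-policy \
      --user-name customer-acme \
      --policy-name acme-incoming-files \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::incoming-files/acme/*"
        }]
      }'

    # With that customer's access keys configured locally, their upload is just:
    aws s3 cp report.csv s3://incoming-files/acme/report.csv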
It seems you're overcomplicating this. If SFTP is a bottleneck and not redundant, you can always create a scaling group (with an ELB or DNS round-robin in front of it) and mount S3 on the EC2 instances with s3fs or goofys. If cost is not an issue here, you can even mount EFS as an NFS share.
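For reference, a rough goofys sketch; the bucket and mount point are placeholders, and it assumes goofys is installed and the instance's role or credentials grant access to the bucket:

    # Mount the bucket; files customers upload then appear under /mnt/incoming.
    mkdir -p /mnt/incoming
    goofys incoming-files /mnt/incoming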
AWS has an example configuration here that seems like it may meet your needs pretty well.
I think you're definitely right to consider s3 over a traditional SFTP setup. If you do go with a server-based approach, I agree with Sergey's answer -- an auto-scaling group of servers backed by shared EFS storage. You will, of course, have to own maintenance of those servers, which may or may not be an issue depending on your expertise and desire to do so.
A pure s3 solution, however, will almost certainly be cheaper and require less maintenance in the long-run.
There is now an AWS managed SFTP service in the AWS Transfer family.
https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/
Today we are launching AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP, or you can make use of an existing identity provider. You can also use IAM policies to control the level of access granted to each user. You can also make use of your existing DNS name and SSH public keys, making it easy for you to migrate to Transfer for SFTP. Your customers and your partners will continue to connect and to make transfers as usual, with no changes to their existing workflows.