How to find a user on the Amazon AWS console

There is a user that can log in via FTP on a setup I'm working with. I can't get hold of the people who set it up, but it is with Amazon. I wanted to find out how I can see what permissions this FTP user has and how to set up another one for third-party access. I think it may be set up on EC2, but I'm not sure.

The FTP server, if it is running on EC2, has no relationship with your AWS console - it is specific to your instance and whatever FTP software is running on the server.
You will need to get access to the instance to find out any more information. You can see the key pair associated with the instance from the console. If you don't have access to that instance, there are ways to get access, but it will involve stopping the instance, mounting the volume on another instance, adding a new key to the volume, and then restarting and using that key to access it.
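A rough sketch of that recovery path with the AWS CLI (all IDs here are placeholders, and the device name and mount point depend on the AMI):

# stop the locked-out instance and detach its root volume
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# attach the volume to a rescue instance you can access
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf
# on the rescue instance: mount it and append your new public key
sudo mount /dev/xvdf1 /mnt
echo "ssh-ed25519 AAAA... you@example" | sudo tee -a /mnt/home/ec2-user/.ssh/authorized_keys
# unmount, move the volume back as the root device, and start the original instance
sudo umount /mnt
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0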

Related

How to recover a lost AWS .pem file and PuTTY key (lost due to a virus)

Yesterday I downloaded FileZilla. After the download I got a warning message from my computer, and when I checked the download folder, all the data had been deleted, including my PuTTY key and .pem file. Could anyone please explain how I can recover these files?
Once you download an AWS .pem file you can never download it again (this is a security measure in case your account is compromised).
Best practice would be to store anything of value in external storage, rather than on a single user's machine.
Unfortunately, as it stands, the instances will not be connectable over SSH without the .pem file. This isn't to say you have lost access to these instances, however.
If the individual host is not important or can be recreated very easily, you could simply create a new SSH key within AWS and launch new instances using it. You can always create an AMI of the current instance to launch a new one that is identical, but specify your new SSH key when you launch.
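For example, with the AWS CLI (IDs and names are placeholders):

# image the existing instance, then launch a copy with a key you do have
aws ec2 create-image --instance-id i-0123456789abcdef0 --name rescue-copy
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --key-name my-new-key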
If the hosts are important, AWS provides ways to access the host via a terminal. Before doing so, generate a new private/public key pair, then add the public key to the host's .ssh/authorized_keys file once you have gained access.
The simplest solution would be to use Session Manager, which lets you access the host either via the console or the CLI.
For Session Manager, the instance's IAM role needs to grant the necessary permissions, and the SSM agent must already be installed.
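Assuming the role and agent are in place, opening a shell is a single call (the instance ID is a placeholder, and the Session Manager plugin for the AWS CLI must be installed):

aws ssm start-session --target i-0123456789abcdef0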

How to add some new code to an existing EC2 instance

Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH/SCP/SFTP to move files to and from them. You can use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance... If you look in the web console you can find its IP(s), its security groups (firewall rules), and the key pair name. Hopefully they gave you the keys. You need these to SSH in.
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys, you'll have to create an AMI of the instance so you can create a new one with a key pair you do have.
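Assuming you do have the key, pulling the code down is plain SSH/SCP; the key path, username, IP, and source directory below are placeholders (the username depends on the AMI, e.g. ec2-user or ubuntu):

# open a shell and look around first
ssh -i ~/keys/mykey.pem ubuntu@203.0.113.10
# then copy the application directory down to your machine
scp -i ~/keys/mykey.pem -r ubuntu@203.0.113.10:/home/ubuntu/app ./app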
Amazon has a set of tools for you in the AWS CodeSuite.
The tool used for "deploying" the code is AWS CodeDeploy. With this service you install an agent onto your host; when triggered, it pulls down an artifact of a code base and installs it on matching hosts. You can even specify additional commands through the hook system.
But you also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated using the AWS CodePipeline tool.
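A minimal sketch of kicking off a deployment by hand, assuming a CodeDeploy application, deployment group, and a zipped revision in S3 already exist (all names here are placeholders):

aws deploy create-deployment \
  --application-name my-django-app \
  --deployment-group-name production \
  --s3-location bucket=my-artifacts,key=my-django-app.zip,bundleType=zip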

How can I use my PEM file in AWS again?

Currently, I am facing an issue related to AWS. A project is already uploaded on an AWS server, and I always connect to that project using the PEM key for that particular project. But since last week I have not been able to connect to the AWS server using the PEM key. I found one solution: by creating a new instance I will get access to the project again, but as a result I will lose my whole data and database. Did this occur due to a virus or anything else? I'm stuck here badly. Any help will be appreciated.
I think your local known_hosts file has a stale entry for this host, so remove the cached host key and try again.
Remove the host key:
ssh-keygen -R [hostname]
SSH again (the -v flag shows why a key is being rejected):
ssh -Tv -i ~/mykey.pem ec2-user@example.com
In order to use the existing EBS volume (with your data) on a new EC2 instance, with a new SSH key:
Create a snapshot of the current instance's EBS volume, and create a new volume from it.
Create a new instance with a new SSH key.
Stop the new instance and attach the previously created volume as the boot volume.
Start the new instance and you should be able to log in with the new SSH key.
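Roughly, with the AWS CLI (all IDs, the availability zone, and the root device name are placeholders; the device name is AMI-dependent):

# snapshot the old root volume and create a new volume from the snapshot
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# stop the new instance, detach its root volume, and attach yours in its place
aws ec2 stop-instances --instance-ids i-0fedcba9876543210
aws ec2 detach-volume --volume-id vol-0fedcba9876543210
aws ec2 attach-volume --volume-id vol-0aaaabbbbccccdddd --instance-id i-0fedcba9876543210 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0fedcba9876543210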
The log indicates that you do not have any networking issues. It is the ssh server on the instance that is rejecting your connection.
The first thing to confirm is that you are connecting to the correct Amazon EC2 instance. If an EC2 instance is stopped and started again, its public IP address might change (depending on how it is configured). Therefore, make sure that you are connecting to the right instance.
Next, confirm that you are using the correct username. You are using ubuntu@, which is correct if the instance is using an Ubuntu AMI. However, it is possible to create additional users on a Linux computer, and the PEM files are associated with specific users. Therefore, confirm that this is the correct username for use with that PEM file.
Next, confirm that you are using the correct PEM file. The PEM file should contain the private half of a keypair that matches the public half stored on the instance in the user's ~/.ssh/authorized_keys file. The log indicates that the instance is rejecting the provided keypair, so you might be using the wrong one.
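A quick way to check: derive the public half from the PEM locally and compare it against the entry in authorized_keys on the instance (the file name is a placeholder):

# prints the public key that corresponds to this private key
ssh-keygen -y -f ~/mykey.pem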
Failing all this, there are some recommended steps available on: Walkthrough: Reset Passwords and SSH Keys on Amazon EC2 Instances - AWS Systems Manager
However, they might not work since you are using an Ubuntu instance, which might not have standard AWS software installed.
Let us know how you go!

Using AWS S3 as an alternative to SFTP

Folks,
I've set up an SFTP server on an EC2 instance to receive files from remote customers that need to send 3 files each, several times throughout the day (each customer connects multiple times a day, each time transferring the 3 files, which keep their names but change their contents). This works fine if the number of customers connecting simultaneously is kept under control; however, I cannot control exactly when each customer will connect (they have automated the connection process at their end). I am anticipating that I may reach a bottleneck in case too many people try to upload files at the same time, and have been looking for alternatives to the whole process ("distributed file transfer" of some sort). That's when I stumbled upon AWS S3, which is distributed by definition, and was wondering if I could do something like the following:
Create a bucket called "incoming-files"
Create several folders inside this bucket, one for each customer
Set up a file transfer mechanism (I believe I'd have to use S3's SDK somehow)
Provide a client application for each customer, so that they can run it at their side to upload the files to their specific folders inside the bucket
This last point is easy on SFTP, since you can set a "root" folder for each user so that when users connect to the server they automatically land in their appropriate folder. I'm not sure if something of this sort can be worked out on S3. Also, the file transfer mechanism would have to provide not only credentials to access the bucket, but also "sub-credentials" to access the folder.
I have been digging into S3 but couldn't quite figure out if this whole idea is (a) feasible and (b) practical. The other limitation with my original SFTP solution is that by definition an SFTP server is a single point of failure, which I'd be glad to avoid. I'd be thrilled if someone could shed some light on this (btw, other solutions are also welcome).
Note that I am trying to eliminate the SFTP server altogether, and not mount an S3 bucket as the "root folder" for the SFTP server.
Thank you
You can create an S3 policy that grants access only to a certain prefix (a "folder" in your plan). The only thing your customers need is permission to make PUT requests. For each customer you will also need to create a set of access keys.
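A sketch of what that could look like, assuming one IAM user per customer and the bucket name from the question (user and file names are placeholders):

# inline policy restricting customer1 to PUTs under their own prefix
aws iam put-user-policy --user-name customer1 --policy-name incoming-files-put --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::incoming-files/customer1/*"
  }]
}'
# with that user's access keys configured, the customer-side upload is a one-liner
aws s3 cp report.csv s3://incoming-files/customer1/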
It seems you're overcomplicating things. If SFTP is a bottleneck and is not redundant, you can always create an Auto Scaling group (with an ELB or DNS round-robin in front of it) and mount S3 on the EC2 instances with s3fs or goofys. If cost is not an issue here, you can even mount EFS as an NFS share.
AWS has an example configuration here that seems like it may meet your needs pretty well.
I think you're definitely right to consider S3 over a traditional SFTP setup. If you do go with a server-based approach, I agree with Sergey's answer - an auto-scaling group of servers backed by shared EFS storage. You will, of course, have to own the maintenance of those servers, which may or may not be an issue depending on your expertise and desire to do so.
A pure S3 solution, however, will almost certainly be cheaper and require less maintenance in the long run.
There is now an AWS managed SFTP service in the AWS Transfer family.
https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/
Today we are launching AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP, or you can make use of an existing identity provider. You can also use IAM policies to control the level of access granted to each user. You can also make use of your existing DNS name and SSH public keys, making it easy for you to migrate to Transfer for SFTP. Your customers and your partners will continue to connect and to make transfers as usual, with no changes to their existing workflows.
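For reference, a sketch of that setup with the AWS CLI; the server ID, role ARN, names, and key below are placeholders (the role must grant the user access to the bucket):

# create a managed SFTP endpoint with service-managed users
aws transfer create-server --identity-provider-type SERVICE_MANAGED
# add a user whose home directory maps into the bucket
aws transfer create-user \
  --server-id s-0123456789abcdef0 \
  --user-name customer1 \
  --role arn:aws:iam::123456789012:role/transfer-s3-access \
  --home-directory /incoming-files/customer1 \
  --ssh-public-key-body "ssh-rsa AAAA..."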

SSH from AWS BeanStalk

I have an application that runs on AWS Beanstalk, and one requirement is to connect to another server using SSH. I could log in as root on the server and generate a key pair that I can use, but this would not scale (we have auto-scaling enabled).
Is there a way to generate and replicate a key pair across the instances that are running?
Edit - I feel the need to provide a better description of my problem.
When I launched the Beanstalk instance I selected the previously created keypair, but looking at the EC2 documentation here, it states the following:
Amazon EC2 stores the public key only, and you store the private key.
This seems to work OK, as I am able to SSH into the EC2 instance. We have another service running on a DigitalOcean-hosted machine, and we need to SSH from the EC2 instance to the DigitalOcean instance.
Important The DigitalOcean instance can only allow key based authentication (user/password authentication is not allowed)
When I log into the EC2 machine I can see that in the .ssh folder I only have the authorized_keys file, which makes sense given the documentation paragraph above.
Is there a way to get a key that I could use to log into the DigitalOcean instance from the EC2 instance?
If I understand you correctly, you need the Beanstalk application to SSH in to another server?
Every EC2 instance gets launched with a designated keypair. You have the option of either creating a new keypair or using a keypair already set (i.e. the keypair created by the Beanstalk application for the first instance).
If you keep the private key on the Beanstalk instance and launch the other instance(s) using that same keypair, the application can use the private key to SSH in, and you can scale the instances without having to go into each one and create new keypairs.
That said, I believe the documentation suggests against keeping the private key on the instance, so perhaps consider launching the non-Beanstalk instances with a configuration script that creates a customized user, using a key and password, and pre-configuring the application with that information. You can keep that information as environment variables within Beanstalk itself, similar to how you would keep RDS credentials.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances via the command line tools), or as base64-encoded text (for API calls).
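A hedged user-data sketch along those lines, which creates a dedicated user on the other instance and authorizes a public key you supply (the username and key below are placeholders):

#!/bin/bash
# cloud-init user data: runs once as root on first boot
useradd --create-home deploy
mkdir -p /home/deploy/.ssh
echo "ssh-ed25519 AAAA... beanstalk-app" >> /home/deploy/.ssh/authorized_keys
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys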
EDIT 1
In order to SSH from computer A to computer B, computer A needs to have the private key in the .ssh directory and computer B needs to have the public key appended to the authorized_keys file in the .ssh directory, so that's perhaps why you don't see either key in the Elastic Beanstalk EC2 instance.
Since you have the public key within the authorized_keys file, you can simply replicate it to the DigitalOcean instance (once it's on the server, do cat public_key >> authorized_keys), and since you're able to SSH in to the Elastic Beanstalk instance, you can simply take the private key from your computer and put it in the .ssh directory of the Beanstalk instance. That way, the DigitalOcean instance will have the public key appended to its authorized_keys file, and the Beanstalk instance will have both the private key and the public key authorized to log in.
That said, this is probably the most insecure way of doing this...I would prefer you generate a new key and use that to be able to SSH from the Beanstalk instance to the DigitalOcean instance.
Note, this is not the same as creating a new IAM user, though you can use IAM to simply create new key pairs.
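Generating that dedicated key is only a couple of commands (the key path, username, and hostname below are placeholders):

# on the Beanstalk instance: create a keypair used only for this hop
ssh-keygen -t ed25519 -f ~/.ssh/do_deploy -N "" -C "beanstalk-to-do"
# authorize the public half on the DigitalOcean machine, then connect with the private half
cat ~/.ssh/do_deploy.pub | ssh user@do-host 'cat >> ~/.ssh/authorized_keys'
ssh -i ~/.ssh/do_deploy user@do-host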
EDIT 2
I guess it will be difficult for an EC2 instance to automatically obtain the private key upon being automatically launched, so the way I see it, you have three options:
1) EC2 instances can be (auto)launched with a custom "user data" script, which I referenced above. In that script, you can include the actual private key data (pretty bad idea IMO) OR have it obtain the private key from somewhere (e.g. SCP with username/password into some machine and download it). Again, all pretty bad ideas.
2) Embed the private key within your Beanstalk application. Not knowing what language your application is written in, it's difficult to determine how bad / good of an idea this is. If it's in Java, private / sensitive keys get embedded all the time, so I don't see why this would be any different. This seems to me to be a fine idea, iff this is an application developed specifically for this use case and will never be used anywhere else. I'd hate to see a developer accidentally deploy this app somewhere else and now that private key is potentially compromised.
3) You could create an AMI of the EC2 instance with the key embedded in it. Then simply instruct the autoscaling to launch a new instance of that AMI and voila, you will have the key in the .ssh directory. I tend to like this idea the best as it uses AWS resources for what they're intended, and I would think makes the key a bit more 'secure' (outside of compromising the EC2 instance itself, it will be much more difficult for anyone to access the key). This wouldn't add any additional scalability over option #2 as you can scale / deploy a Beanstalk application just as much as you can an AMI image. That preference is up to you.
NOTE, this of course says nothing about scaling the DO machine, assuming that's even a requirement.