Unable to SSH into AWS EC2 instance with instance metadata turned off

I am not able to SSH into an EC2 instance if it is launched with the instance metadata service turned off:
ec2.runInstances({
  // ... other launch parameters ...
  MetadataOptions: {
    HttpEndpoint: 'disabled'
  }
})
This, however, is not an issue if I launch with MetadataOptions enabled and then disable the endpoint with a modify-instance-metadata-options call after the instance has finished starting up. Is this documented behaviour? I couldn't find it explicitly mentioned anywhere in the documentation.
Note - this is not a security group, Network ACL, etc issue.

I noticed this too. It seems that disabling IMDS breaks all of the following:
SSH access is broken; the authorized_keys file for the default user (e.g. ec2-user or ubuntu) is not populated, because cloud-init normally fetches the EC2 key pair's public key from instance metadata.
Cloud-init/cloud-config (aka "user data") does not run. User data is normally fetched from the instance metadata service (http://169.254.169.254/latest/user-data), which is unavailable when IMDS is disabled.
Therefore, if your desire is to disable IMDS from the moment of launch, it seems the only viable workaround is to create your own AMI that has your own configuration (i.e. SSH authorized_keys) baked into it. Packer is commonly used for building AMIs in this way.
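If you don't want to adopt Packer, a rough manual sketch of the same idea, with a hypothetical instance ID and image name: launch a temporary instance with IMDS still enabled, write your configuration (e.g. authorized_keys), then snapshot it into an AMI:

# Snapshot the configured instance into a reusable image.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "ssh-keys-baked-in-v1"

Instances launched from that AMI can then have IMDS disabled from the very start.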
An alternate approach would be to give the EC2 instance profile permission to call ModifyInstanceMetadataOptions, conditionally scoped so that the instance can only affect itself, and then call aws ec2 modify-instance-metadata-options --http-endpoint disabled at the end of the setup script. Once an instance locks itself out this way, it cannot unlock itself, because it will no longer have access to STS tokens via the IMDS endpoint.
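A minimal sketch of that self-lockout step, assuming the instance profile allows ec2:ModifyInstanceMetadataOptions on the instance itself, the AWS CLI is installed, and a default region is configured:

# Fetch this instance's ID via IMDSv2 while the endpoint is still enabled.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id")

# Disable the metadata endpoint as the final setup step. After this call,
# the instance can no longer obtain role credentials from IMDS, so it
# cannot re-enable the endpoint itself.
aws ec2 modify-instance-metadata-options \
  --instance-id "$INSTANCE_ID" \
  --http-endpoint disabled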

Related

Successful output via the Ansible ec2 module for starting an instance, but it doesn't change the state?

We're using the Ansible ec2 module to start instances in an external team's environment. The module succeeds on the Ansible end, but looking at the console, it does not register any changes. We are able to successfully stop instances, but not start them. I've tried to mimic the behavior using the AWS CLI from our AWS bastion host, and that produces the same result: the VM goes into a 'pending' state but does not start. We've configured the AWS CLI to use an IAM instance profile for authentication instead of AWS access keys, due to constraints from our client, and we use this same configuration for our AWS Ansible modules. The profile seems to have all the necessary permissions based on the AWS documentation; currently 'AmazonEC2FullAccess' is the only policy attached to our role. I would think that would be enough. If we're able to stop instances successfully with that policy, why wouldn't we be able to start them, unless another permission is needed? What are we doing incorrectly?
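For reference, a minimal sketch of the CLI reproduction described above (the instance ID is hypothetical):

# Start the instance, then check its state; in the scenario above it
# reaches 'pending' but never transitions to 'running'.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].State.Name'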

IMDS in ECS Task without Exposing Instance Role Credentials

In our ECS cluster running with EC2 instances we would like our tasks to be able to access the metadata server (169.254.169.254) without exposing the instance role credentials available through metadata on the task:
http://169.254.169.254/latest/meta-data/iam/security-credentials/ <INSTANCE_ROLE_NAME>
I am aware of IMDSv2, but I am not sure how it could solve our problem in this specific case. Another option could be to simply disable IMDS within tasks, but we need to obtain the EC2 instance's metadata from within them.
Would there be a workaround or solution to our problem that would allow us to benefit from the metadata without exposing the instance role credentials?
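For context, the exposure looks like this from inside a task (IMDSv1 shown for brevity; any process in a bridge- or host-networked task that can reach 169.254.169.254 can do the same):

# List the role name(s) exposed by the instance metadata service, then
# fetch the temporary credentials for the instance role.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<INSTANCE_ROLE_NAME>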

How to recover a lost AWS .pem file and PuTTY key, lost due to a virus

Yesterday I downloaded FileZilla. After the download, I got a warning message from my computer, and when I checked the download folder, all data had been deleted, including my PuTTY key and .pem file. Could anyone please explain how I can recover these files?
Once you download an AWS .pem file, you can never download it again; AWS does not retain the private key, for security reasons in case your account is compromised.
Best practice would be to store anything of value in external storage, rather than on a single user's machine.
Unfortunately, as it stands, the instances will not be reachable over SSH without a .pem file. This isn't to say you have lost access to these instances, however.
If the individual host is not important or can be recreated very easily, you could simply create a new SSH key within AWS and launch new instances using it. You can always create an AMI of the current instances to launch a new one that is identical, but specify your new SSH key when you launch.
If the hosts are important, contact AWS support, who may be able to give you access to the host via a terminal. Before accessing it, generate a new private/public key pair, then add the public key to the host's .ssh/authorized_keys file once you have gained access.
The simplest solution would be to use Session Manager to access the host, either via the console or the CLI.
For Session Manager, the instance's IAM role will need to grant the required permissions, and the SSM Agent must already be installed.
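A minimal sketch of regaining access that way, assuming the Session Manager plugin for the AWS CLI is installed locally and the instance meets the prerequisites above (the instance ID and key material are hypothetical):

# Open a shell on the instance without SSH.
aws ssm start-session --target i-0123456789abcdef0

# Inside the session, append a freshly generated public key (created
# locally with ssh-keygen) to the default user's authorized_keys file:
echo "ssh-ed25519 AAAA...your-new-public-key" >> /home/ec2-user/.ssh/authorized_keys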

Update terraform resource after provisioning

So I recently asked a question about how to provision instances that depend on each other. The answer I got was that I could instantiate the 3 instances, then have a null resource with a remote-exec provisioner that would update each instance.
It works great, except that in order to work, my instances need to be configured to allow SSH. And since they are in a private subnet, I first need to allow SSH on a public instance that will then bootstrap my 3 instances. This bootstrap operation requires allowing SSH on 4 instances that really don't need it once the bootstrap is complete. This is not that bad, as I can still restrict the traffic to known IPs/subnets, but I still thought it was worth asking if there is a way to avoid the problem.
Can I update the security group of running instances in a single terraform plan? Example: Instantiate 3 instances with security_group X, provision them through ssh, then update the instances with security_group Y, thus disallowing ssh. If so, how? If not, are there any other solutions to this problem?
Thanks.
Based on the comments.
Instead of SSH, you could use AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
This requires your instances to be managed by AWS Systems Manager (SSM), which needs three things:
Network connectivity to the SSM service. Since your instances are in a private subnet, they have to reach SSM either through a NAT gateway or through VPC interface endpoints for SSM.
The SSM Agent installed and running. This is usually not an issue, as most official AMIs on AWS already have it set up.
An instance role with the AmazonSSMManagedInstanceCore AWS managed policy.
Since Run Command is not natively supported by Terraform, you either have to use a local-exec provisioner to run the command through the AWS CLI, or invoke it through a Lambda function using aws_lambda_invocation.
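A minimal sketch of the local-exec route, with hypothetical instance IDs and script path:

# Run the bootstrap step on the private instances via SSM Run Command
# instead of SSH (e.g. from a Terraform local-exec provisioner).
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" "i-0fedcba9876543210" \
  --parameters 'commands=["/usr/local/bin/bootstrap.sh"]'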

SSH from AWS Elastic Beanstalk

I have an application that runs on AWS Elastic Beanstalk, and one requirement is to connect to another server using SSH. I could log in to the server as root and generate a key pair to use, but this would not scale (we have auto-scaling enabled).
Is there a way to generate and replicate a key pair across the instances that are running?
Edit - I feel the need to provide a better description of my problem.
When I launch the Beanstalk instance, I select the previously created key pair, but the EC2 documentation here states the following:
Amazon EC2 stores the public key only, and you store the private key.
This seems to work OK, as I am able to SSH into the EC2 instance. We have another service running on a DigitalOcean-hosted machine, and we need to SSH from the EC2 instance to the DigitalOcean instance.
Important: the DigitalOcean instance only allows key-based authentication (user/password authentication is not allowed).
When I log in to the EC2 machine, I can see that the .ssh folder only contains the authorized_keys file, which makes sense given the documentation paragraph above.
Is there a way to get a key that I could use to log in to the DigitalOcean instance from the EC2 instance?
If I understand you correctly, you need the Beanstalk application to SSH in to another server?
Every EC2 instance gets launched with a designated keypair. You have the option of either creating a new keypair or using a keypair already set (i.e. the keypair created by the Beanstalk application for the first instance).
Keeping the private key on the Beanstalk instance and launching the other instance(s) using that same key pair would allow the application to use the private key to SSH in, and would also let you scale the instances without having to go into each one and create new key pairs.
That said, I believe the documentation suggests against keeping the private key on the instance, so perhaps consider launching the non-Beanstalk instances with a configuration script that creates a customized user, perhaps using a key and password, and pre-configuring the application with that information. You can keep that information as environment variables within Beanstalk itself, similar to how you would keep RDS credentials.
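A hedged sketch of setting such values as Beanstalk environment variables from the CLI (the environment name and variable names are hypothetical):

# Store connection details in the Beanstalk environment, like RDS creds.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=SSH_USER,Value=deploy \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=SSH_HOST,Value=do-host.example.com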
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances via the command line tools), or as base64-encoded text (for API calls).
EDIT 1
In order to SSH from computer A to computer B, computer A needs to have the private key in its .ssh directory and computer B needs to have the public key appended to the authorized_keys file in its .ssh directory; that's perhaps why you only see the authorized_keys file on the Elastic Beanstalk EC2 instance.
Since you have the public key within the authorized_keys file, you can replicate it to the DigitalOcean instance (once it's on that server, run cat public_key >> ~/.ssh/authorized_keys), and since you're able to SSH into the Elastic Beanstalk instance, you can take the private key from your computer and put it in the .ssh directory of the Beanstalk instance. That way, the DigitalOcean instance will have the public key appended to its authorized_keys file, and the Beanstalk instance will have the matching private key to log in with.
That said, this is probably the most insecure way of doing this...I would prefer you generate a new key and use that to be able to SSH from the Beanstalk instance to the DigitalOcean instance.
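A minimal sketch of that safer approach, with a hypothetical user and hostname:

# 1. On the Beanstalk instance, generate a dedicated key pair just for
#    the Beanstalk -> DigitalOcean connection.
ssh-keygen -t ed25519 -f ~/.ssh/do_key -N ""

# 2. Append the public key (~/.ssh/do_key.pub, copied over out-of-band)
#    to the DigitalOcean host's authorized_keys file:
cat do_key.pub >> ~/.ssh/authorized_keys   # run on the DigitalOcean host

# 3. Back on the Beanstalk instance, connect with the new private key.
ssh -i ~/.ssh/do_key deploy@do-host.example.com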
Note, this is not the same as creating a new IAM user; these are SSH key pairs, which you create and manage through EC2, not IAM.
EDIT 2
I guess it will be difficult for an EC2 instance to automatically obtain the private key upon being automatically launched, so the way I see it, you have three options:
1) EC2 instances can be (auto)launched with a custom "user data" script, which I referenced above. In that script, you can include the actual private key data (pretty bad idea IMO) OR have it obtain the private key from somewhere (e.g. SCP with username/password into some machine and download it). Again, all pretty bad ideas.
2) Embed the private key within your Beanstalk application. Not knowing what language your application is written in, it's difficult to determine how bad / good of an idea this is. If it's in Java, private / sensitive keys get embedded all the time, so I don't see why this would be any different. This seems to me to be a fine idea, iff this is an application developed specifically for this use case and will never be used anywhere else. I'd hate to see a developer accidentally deploy this app somewhere else and now that private key is potentially compromised.
3) You could create an AMI of the EC2 instance with the key embedded in it. Then simply instruct the autoscaling to launch a new instance of that AMI and voila, you will have the key in the .ssh directory. I tend to like this idea the best as it uses AWS resources for what they're intended, and I would think makes the key a bit more 'secure' (outside of compromising the EC2 instance itself, it will be much more difficult for anyone to access the key). This wouldn't add any additional scalability over option #2 as you can scale / deploy a Beanstalk application just as much as you can an AMI image. That preference is up to you.
NOTE, this of course says nothing about scaling the DO machine, assuming that's even a requirement.