How to add aws ec2 at rundeck? - amazon-web-services

I added a new AWS EC2 source in Rundeck and also put the rundeck-ec2 plugin in the libext folder. Everything is working, but the new AWS EC2 node is not showing up on the Nodes page.
Any idea what is happening and how to solve it?

Here are the steps to use the EC2 plugin with Rundeck; in this guide I used Amazon Linux 2 as the EC2 instance.
Download the plugin from here, and put the .jar file on Rundeck libext directory.
Check that the plugin is ready: go to the Gear icon (top right) > Plugins > Installed Plugins. If you type "ec2" in the search box, you can see that the plugin is correctly installed on your Rundeck instance.
Now it's time to add your nodes: go to Project Settings (Rundeck sidebar) > Edit Nodes and click the "Add a new Node Source +" button, then select "AWS EC2 Resources" from the list.
Next, you need to pass some parameters to connect to your EC2 nodes. Focus on the "Access Key", "Secret Key" and "Endpoint" textboxes.
You can get the first two by going to your AWS profile menu (top right on AWS) > "My Security Credentials" and clicking the "Create access key" button, which generates an Access Key ID and Secret Access Key (you can also use an existing access key created before).
The third one ("Endpoint") is an HTTPS URL for your region's EC2 endpoint. For example, if your EC2 nodes are in the US East (N. Virginia) region, you need to put https://ec2.us-east-1.amazonaws.com. You can see all endpoint codes here.
If you click on "Nodes" (Rundeck sidebar) you can see your EC2 nodes listed.
Now, you need to access them, for that you have two methods.
5a. Using the AWS .pem file: Click on the Gear icon (top right) > Key Storage and add a new private key with the .pem file content, give it a name, and save it. Now go to Edit Configuration (Rundeck sidebar) > Default Node Executor tab, reference your Key Storage entry in the "SSH Key Storage Path" textbox, and save.
5b. Like any SSH remote node: Just add the public key (id_rsa.pub file content) of the rundeck user (or the user that launches Rundeck) to the authorized_keys file on the EC2 node (/home/ec2-user/.ssh/authorized_keys); now the EC2 node trusts the Rundeck instance.
Run any command against your nodes on the Rundeck's Commands page.
You can see the full documentation here and here a video about Rundeck and EC2 usage.
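For reference, a node source added through the GUI is stored in the project configuration. A sketch of what the resulting project.properties entry might look like (property names follow the EC2 plugin's documented settings; all values here are placeholders, and the source index 2 is arbitrary):

```properties
resources.source.2.type=aws-ec2
resources.source.2.config.accessKey=AKIAXXXXXXXXXXXXXXXX
resources.source.2.config.secretKey=<your-secret-key>
resources.source.2.config.endpoint=https://ec2.us-east-1.amazonaws.com
resources.source.2.config.filter=tag:Team=ops
```

Editing this file directly is equivalent to using the "Add a new Node Source +" form.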

Related

Rundeck AWS node plugin user issue

I am using the Rundeck Amazon EC2 node source to populate my node list. The node list is created successfully, but some of the machines use the ubuntu user and the rest use ec2-user. How can I add all the nodes in the same project?
Can Rundeck change the user to ec2-user and retry if the connection fails with the ubuntu user?
The easiest way is to add the AWS .pem key in the key storage and then reference it at the project level: Project Settings > Edit Configuration > Default Node Executor tab (SSH Key Storage Path section) and save. In that way, you can access all nodes directly.
Alternatively, you can add the rundeck user public key to the /home/user/.ssh/authorized_keys file on each remote SSH account (or the user that launches Rundeck if you're using a WAR-based installation).
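If you go the public-key route, one way to push the rundeck user's key to every node regardless of its login user is ssh-copy-id. A sketch that just prints the commands to run (hostnames and the key path are placeholders):

```shell
# Push the rundeck user's public key to each remote account, whatever the
# login user is on that node (hosts and key path are placeholders).
for host in ubuntu@node1.example.com ec2-user@node2.example.com; do
  echo "ssh-copy-id -i /var/lib/rundeck/.ssh/id_rsa.pub ${host}"
done
# Drop the 'echo' to actually copy the key to each node.
```

After that, both ubuntu and ec2-user nodes trust the same Rundeck key.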

How are ~/.ssh/authorized_keys and google cloud project/instance level metadata related?

From what I read, ~/.ssh/authorized_keys just takes what is in the project-level metadata. If I delete an entry from the console, it disappears from the authorized_keys file too.
However, if I delete it from the authorized_keys file, the console still shows the entry with the deleted public key.
Once I exit the SSH session, I'm then not able to go back in with either gcloud compute ssh user@instance_name or ssh user@instance_ip. Why is this so?
I had to go to the console and delete that entry (the one I previously removed by editing the authorized_keys file directly), and only then did gcloud compute ssh user@instance_name work properly again, letting me add my google_compute_engine.pub into the project metadata so that ssh user@instance_ip works too.
P.S. I'm unfamiliar with how instance-level metadata works, so I only experimented with project-level metadata SSH keys. If any answer can comment on whether it applies to the instance level too, that would be great.
You can provide SSH keys at either the project or the instance level. Don't edit the files by hand; add the keys in the GCE console, because they're generally managed by GCP. That way you can even, for example, generate and provision a new key, run a script, and let the key expire, so you never have to store the key anywhere, which is quite unlike a traditional VM. Note that the amount of key data you can add to instance metadata is limited.
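A sketch of adding a project-level key the managed way with gcloud, printing the command rather than running it (the username and key are placeholders; gcloud compute instances add-metadata is the instance-level counterpart):

```shell
# Each line of the ssh-keys metadata value is "<username>:<public key>".
# Username and key below are placeholders.
printf 'alice:ssh-ed25519 AAAA...placeholder alice@example.com\n' > /tmp/ssh-keys.txt
echo "gcloud compute project-info add-metadata --metadata-from-file ssh-keys=/tmp/ssh-keys.txt"
# Drop the 'echo' to apply the metadata change.
```

Keys added this way are propagated back into authorized_keys by the guest environment, so the console and the file stay in sync.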

How to access EC2 Instance even if PEM file is lost

I lost the PEM key to the EC2 Instance.
I followed all the following steps:
HOW TO ACCESS EC2 INSTANCE EVEN IF PEM FILE IS LOST
Accessing the EC2 instance even if you lose the .pem file is rather easy.
First, create a new instance with a new key pair; call it the 'helper' instance, in the same region and VPC as the lost-key instance.
Now stop the lost-key instance. Remember not to terminate the instance, only stop it.
Go to EBS volumes, select the root volume of the lost-key instance, and detach it.
Now select the detached volume again, and this time attach it to the helper instance we created before. Since the helper instance already has a root volume by default at /dev/sda1, the newly attached volume will be secondary (e.g., /dev/sdf).
Login to your helper instance with its pem file.
Execute the commands below (the attached volume may show up as /dev/xvdf1 or as an NVMe device; check with lsblk):
# mount /dev/xvdf1 /mnt
# cp /root/.ssh/authorized_keys /mnt/root/.ssh/
# umount /mnt
Detach the secondary volume from the helper instance.
Attach the volume back to the original (lost-key) instance as its root device (/dev/sda1), then start that instance.
Terminate the helper instance, and use its .pem file to log in to the recovered instance.
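The same volume shuffle can be driven from the AWS CLI. A sketch that just prints the commands to run (all IDs are placeholders, and each step should finish before the next):

```shell
LOST=i-0aaaaaaaaaaaaaaaa     # instance whose key was lost (placeholder ID)
HELPER=i-0bbbbbbbbbbbbbbbb   # helper instance (placeholder ID)
VOL=vol-0ccccccccccccccccc   # root volume of the lost-key instance (placeholder)
cat <<EOF
aws ec2 stop-instances --instance-ids ${LOST}
aws ec2 detach-volume --volume-id ${VOL}
aws ec2 attach-volume --volume-id ${VOL} --instance-id ${HELPER} --device /dev/sdf
# ...mount, copy authorized_keys, unmount on the helper as shown above...
aws ec2 detach-volume --volume-id ${VOL}
aws ec2 attach-volume --volume-id ${VOL} --instance-id ${LOST} --device /dev/sda1
aws ec2 start-instances --instance-ids ${LOST}
EOF
```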
Great to see these answers. Just for information, AWS has also published an official tutorial for this, so I'm sharing it here: https://youtu.be/F8jXE-_hdfg
Judging by this video, AWS Support has been getting this same question from users, so they made a detailed, step-by-step walkthrough. Hope this helps.
A few weeks ago AWS announced SSM Session Manager. This allows you to access (login) to your EC2 instances without requiring a key pair, password, open ports, etc. Both Windows and Linux are supported.
The latest AMIs do not have the latest version of the SSM agent. You will need to update that first, which you can also do via the SSM Console or via AWS CLI.
AWS Systems Manager Session Manager
Once you connect to your system, you can then correct any problems that you have. For example, you could create a new key pair in the AWS Console and then copy the public key to ~/.ssh/authorized_keys so that you can once again access your system via SSH.
For Windows systems, you can even change the Administrator password if it has been forgotten. This can be a lifesaver.
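A sketch of the Session Manager approach from the AWS CLI, printing the command rather than running it (the instance ID is a placeholder):

```shell
# Open a shell through Session Manager: no key pair, password, or open
# port 22 needed. Requires the SSM agent on the instance, an IAM instance
# profile with SSM permissions, and the session-manager-plugin for the CLI.
INSTANCE=i-0123456789abcdef0   # placeholder instance ID
echo "aws ssm start-session --target ${INSTANCE}"
# Drop the 'echo' to actually start the session.
```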
In my case an auto-scaling group was enabled, so it was easy to attach the instances to a new key pair. Here are the steps I followed:
Created new Key pair under EC2 Dashboard -> Key Pairs (download the .pem file in this step)
Go to Auto Scaling -> Launch Configurations
Select the required launch configuration, then copy it
While reviewing the new launch configuration, you can create a new key pair or select the existing one created in step 1
Once the new launch configuration is created, go to the auto-scaling group
Select the auto-scaling group, then select the new launch configuration from the dropdown
Once this is done, if you stop an instance in the auto-scaling group, it will create a new one with the new launch configuration (and the new key pair)
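Step 1 (the new key pair) can also be done from the AWS CLI. A sketch that prints the commands (the key name is a placeholder):

```shell
KEY_NAME=recovery-key   # placeholder key pair name
echo "aws ec2 create-key-pair --key-name ${KEY_NAME} --query KeyMaterial --output text > ${KEY_NAME}.pem"
echo "chmod 400 ${KEY_NAME}.pem"
# Drop the 'echo's to actually create the key pair and save the .pem file.
```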
Here are the steps to access an EC2 instance on the fly after losing the key pair:
Create a new instance in the same region with a new key pair and name it TEST
Now connect to the new instance and copy the contents of authorized_keys from the .ssh directory (~/.ssh/authorized_keys)
Go to the security group of the lost-key instance and allow SSH for EC2 Instance Connect
(You can check the IP range for a specific region with: curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix')
Once you're done with the security group changes, connect to the lost-key instance with EC2 Instance Connect
Now open .ssh/authorized_keys and replace its contents with the TEST instance's authorized_keys
You can now access your lost-key instance with the new key pair
Terminate the TEST instance and revert the security group changes
Note that this solution might expose port 22 of your instance for a while.
Thank you.
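EC2 Instance Connect can also push the temporary key from the CLI rather than the browser. A sketch that prints the commands (IDs, zone, and key path are placeholders; a pushed key is only valid for about 60 seconds):

```shell
cat <<'EOF'
aws ec2-instance-connect send-ssh-public-key \
  --instance-id i-0123456789abcdef0 \
  --availability-zone us-east-1a \
  --instance-os-user ec2-user \
  --ssh-public-key file://~/.ssh/id_rsa.pub
ssh ec2-user@<instance-public-ip>   # within the key's validity window
EOF
```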

Amazon AWS EMR "no" configuration sample application

I registered for an AWS account yesterday, and today I followed a couple of videos on YouTube to run a sample word count on an input file in S3.
I tried to do that, but I don't see any "Configure Sample Application" button. I have attached an image. It may be trivial; since I am new, I may be missing something.
Process i followed:
Created a bucket in S3
AWS -> Security Credentials: created an access key
AWS -> EC2: created a key pair
AWS -> IAM: created a new role as EC2 + administrator
AWS -> EMR: create cluster
Here I don't see any option for the "Configure Sample Application" button. Please check the image for more detail.
Amazon EMR used to have a 'Sample Application' button, like this:
However, that button is no longer available in the Amazon EMR interface.
The tutorial is most probably out-of-date. (Things change fast on AWS!)

How to limit access to a shared ami

I need to share an ami so that it can be used by a client to create their own instances through their own account. However, I do not wish that client to be able to ssh in to the instance. I will need to be able to ssh into the instance to be able to maintain it. They will have ftp and www access only. I've got the ftp and www access part working through ssh configuration. How do I keep them out of ssh when they are starting up the instance with their own keypairs?
Well, you can't really prevent this, if they are determined to get in, since they control the instance.
Stop instance → unmount root EBS volume → mount elsewhere → modify contents → unmount → remount → restart → pwn3d.
However, according to the documentation, if you don't configure the AMI to load their public key, it just sits there and doesn't actually work for them.
Amazon EC2 allows users to specify a public-private key pair name when launching an instance. When a valid key pair name is provided to the RunInstances API call (or through the command line API tools), the public key (the portion of the key pair that Amazon EC2 retains on the server after a call to CreateKeyPair or ImportKeyPair) is made available to the instance through an HTTP query against the instance metadata.
To log in through SSH, your AMI must retrieve the key value at boot and append it to /root/.ssh/authorized_keys (or the equivalent for any other user account on the AMI). Users can launch instances of your AMI with a key pair and log in without requiring a root password.
If you don't fetch the key and append it, as described, it doesn't appear that their key, just by virtue of being "assigned" to the instance, will actually give them any access to the instance.
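The fetch-and-append step that stock AMIs perform at boot uses the instance metadata service. A sketch that prints the command (IMDSv1 path; the URL only resolves from inside the instance itself):

```shell
MD=http://169.254.169.254/latest/meta-data
echo "curl -fs ${MD}/public-keys/0/openssh-key >> /root/.ssh/authorized_keys"
# Run the printed command on the instance itself (as root, or adapt the
# path for another account) to enable key-pair login.
```

If your AMI's init scripts never run anything like this, the client's assigned key pair is simply ignored.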
I was able to finally accomplish this crazy task by using this process:
1) Log in as ubuntu
2) Create a new user belonging to the sudo and admin groups
3) Install all my software under the new user
4) Verify that the new user has all required privileges
5) chroot-jail the ubuntu user to FTP access only
When the AMI is transferred to the new zone/account, the ubuntu user exists as a sudoer but cannot SSH into the instance. FTP allows them to connect to the system, but they see a bare directory and cannot cd anywhere else in the system.
It's not a complete denial, but I think it will serve the purpose for this client.
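The chroot-jail step (5) is commonly done over SFTP rather than plain FTP. A sketch of the sshd_config fragment, assuming the internal-sftp subsystem (the chroot directory must be owned by root and not writable by the jailed user, so a writable subdirectory is usually created inside it):

```
Subsystem sftp internal-sftp

Match User ubuntu
    ChrootDirectory /home/ubuntu
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

With ForceCommand in place, the ubuntu user gets file transfer but never an interactive shell.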