Rundeck AWS node plugin user issue

I am using the Rundeck Amazon EC2 node source to populate my node list. The node list is created successfully, but some of the machines use the ubuntu user and the rest use ec2-user. How can I add all the nodes to the same project?
Can Rundeck switch to ec2-user and retry if the connection fails with the ubuntu user?

The easiest way is to add the AWS .pem key to Key Storage and then reference it at the project level: Project Settings > Edit Configuration > Default Node Executor tab (SSH Key Storage Path section), then save. That way you can access all nodes directly.
Alternatively, you can add the public key of the rundeck user (or of whichever user launches Rundeck, if you're using a WAR-based installation) to the /home/user/.ssh/authorized_keys file on each remote SSH account.
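To also handle the mixed ubuntu/ec2-user logins without any retry logic, the EC2 source's "Mapping params" field can set each node's SSH username from an instance tag; a minimal sketch, assuming your instances carry a "Username" tag (the tag name here is hypothetical):

username.selector=tags/Username;username.default=ec2-user

And the project-level key reference set through the GUI above can equally be put in project.properties (the storage path is a placeholder):

project.ssh-key-storage-path=keys/project/aws-nodes.pem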

Related

How to add aws ec2 at rundeck?

I added a new aws-ec2 source at Rundeck and also put the rundeck-ec2 plugin in the libext folder. Everything seems to work, but the new aws-ec2 nodes are not showing up on the Nodes page.
Any idea what is happening and how to solve it?
Here are the steps to use the EC2 plugin with Rundeck; in this guide I used Amazon Linux 2 as the EC2 instance.
1. Download the plugin from here, and put the .jar file in the Rundeck libext directory.
2. Check that the plugin is ready: go to the Gear icon (top right) > Plugins > Installed Plugins; if you type "ec2" in the search box you can see that the plugin is installed on your Rundeck instance.
3. Now it's time to add your nodes: go to Project Settings (Rundeck sidebar) > Edit Nodes, click the "Add a new Node Source +" button, then select "AWS EC2 Resources" from the list.
4. Next, you need to pass some parameters to connect to your EC2 nodes. Focus on the "Access Key", "Secret Key", and "Endpoint" textboxes. You can get the first two by going to your AWS profile link (top right on AWS) > "My Security Credentials" > "Create access key", which generates an Access Key ID and Secret Access Key (you can also reuse a pair created earlier). The third one ("Endpoint") is an HTTPS URL for your instances' region; for example, if your EC2 nodes are in the US East (N. Virginia) region, use https://ec2.us-east-1.amazonaws.com. You can see all endpoint codes here. If you now click "Nodes" (Rundeck sidebar) you should see your EC2 nodes listed.
5. Now you need to access them; for that you have two methods.
5a. Using the AWS .pem file: click the Gear icon (top right) > Key Storage and add a new Private Key with the .pem file content, give it a name, and save it. Then go to Edit Configuration (Rundeck sidebar) > Default Node Executor tab, reference your Key Storage entry in the "SSH Key Storage Path" textbox, and save.
5b. Like any SSH remote node: add the public key (id_rsa.pub contents) of the rundeck user (or whichever user launches Rundeck) to /home/ec2-user/.ssh/authorized_keys, so that the EC2 node trusts the Rundeck instance.
6. Run any command against your nodes on Rundeck's Commands page.
You can see the full documentation here, and here is a video about Rundeck and EC2 usage.
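If you prefer editing files over the GUI, the same node source can be declared in the project's project.properties; a minimal sketch with placeholder credentials (the source index is arbitrary, and the config key names are assumed to match the GUI fields):

# AWS EC2 node source entry in project.properties (credentials are placeholders)
resources.source.2.type=aws-ec2
resources.source.2.config.accessKey=AKIAXXXXXXXXXXXXXXXX
resources.source.2.config.secretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
resources.source.2.config.endpoint=https://ec2.us-east-1.amazonaws.com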

How to access EC2 Instance even if PEM file is lost

I lost the PEM key to my EC2 instance.
I followed all of the steps below:
HOW TO ACCESS EC2 INSTANCE EVEN IF PEM FILE IS LOST
Accessing the EC2 instance even if you lose the .pem file is rather easy.
First, create a new instance with a new key pair; call it the 'helper' instance, and put it in the same availability zone and VPC as the lost-pem instance (an EBS volume can only be attached to an instance in the same availability zone).
Now stop the lost-pem instance. Remember not to terminate the instance, only stop it.
Go to EBS Volumes, select the root volume of the lost-pem instance, and detach it.
Now select the detached volume again, and this time attach it to the helper instance we created before. Since the helper instance already has its own root volume at /dev/sda1 by default, the newly attached volume will be secondary (e.g. /dev/sdf, which typically appears inside the instance as /dev/xvdf).
Log in to your helper instance with its .pem file.
Execute the commands below:
# mount /dev/xvdf1 /mnt - mount the lost instance's root filesystem on the helper
# cp /root/.ssh/authorized_keys /mnt/root/.ssh/ - copy the helper's authorized key onto it
# umount /mnt - unmount before detaching the volume
Detach the secondary volume from the helper instance.
Attach the volume back to the original instance as its root device (e.g. /dev/sda1) and start the instance. Terminate the helper instance.
Use the helper instance's .pem file to log into the recovered instance.
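For reference, the same volume shuffle can be driven from the AWS CLI; a rough sketch with placeholder instance and volume IDs:

$ aws ec2 stop-instances --instance-ids i-LOSTPEM - stop the lost-pem instance
$ aws ec2 detach-volume --volume-id vol-ROOTVOL - detach its root volume
$ aws ec2 attach-volume --volume-id vol-ROOTVOL --instance-id i-HELPER --device /dev/sdf
(mount, copy authorized_keys, and umount on the helper as above, then reverse the attachment:)
$ aws ec2 detach-volume --volume-id vol-ROOTVOL
$ aws ec2 attach-volume --volume-id vol-ROOTVOL --instance-id i-LOSTPEM --device /dev/sda1
$ aws ec2 start-instances --instance-ids i-LOSTPEM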
Great to see these answers. Just for information, AWS has also published an official tutorial for the same procedure, so I'm sharing it here: https://youtu.be/F8jXE-_hdfg
From this video it's clear that AWS Support has been getting this same question from users, and so produced this material with a detailed, step-by-step structure. Hope this helps.
A few weeks ago AWS announced SSM Session Manager. This allows you to access (log in to) your EC2 instances without requiring a key pair, password, or open inbound ports. Both Windows and Linux are supported.
The latest AMIs do not ship the latest version of the SSM agent, so you will need to update it first, which you can also do via the SSM Console or the AWS CLI.
AWS Systems Manager Session Manager
Once you connect to your system, you can then correct any problems that you have. For example, you could create a new key pair in the AWS Console and then copy the public key to ~/.ssh/authorized_keys so that you can once again access your system via SSH.
For Windows systems, you can even change the Administrator password if it has been forgotten. This can be a lifesaver.
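Once the agent and your IAM permissions are in place, starting a shell session from the AWS CLI looks like this (the instance ID is a placeholder, and the Session Manager plugin for the AWS CLI must be installed):

$ aws ssm start-session --target i-0123456789abcdef0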
In my case an Auto Scaling group was enabled, so it was easy to move the instances to a new key pair. Here are the steps I followed:
Create a new key pair under EC2 Dashboard -> Key Pairs and download the .pem file in this step (a CLI alternative is sketched after this list).
Go to Auto Scaling -> Launch Configurations.
Select the required launch configuration, then copy the launch configuration.
While reviewing the copied launch configuration you can create a new key pair, or select the existing one created in step 1.
Once the new launch configuration is created, go to the Auto Scaling group.
Select the Auto Scaling group, then select the new launch configuration from the dropdown.
Once this is done, stopping an Auto Scaling group instance will cause a replacement to be launched with the new launch configuration (and the new key pair).
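Step 1 can also be done from the AWS CLI; a small sketch (the key name is a placeholder):

$ aws ec2 create-key-pair --key-name new-asg-key --query 'KeyMaterial' --output text > new-asg-key.pem
$ chmod 400 new-asg-key.pem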
Here are the steps to access an EC2 instance on the fly after losing its key pair:
Create a new instance in the same region with a new key pair and name it TEST.
Now connect to the new instance and copy the contents of authorized_keys from its .ssh directory (~/.ssh/authorized_keys).
Go to the security group of the lost-pem instance and allow SSH for EC2 Instance Connect
(you can check the IP range for a specific region with the command: curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix').
Once you are done with the security group changes, connect to the lost-pem instance with EC2 Instance Connect.
Now open ~/.ssh/authorized_keys and replace its contents with the TEST instance's authorized_keys.
You can now access the lost-key instance with the new key pair.
Terminate the TEST instance and revert the security group changes.
Take note that this solution might expose port 22 of your instance for a while.
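EC2 Instance Connect can also push a temporary public key from the AWS CLI instead of the browser console; a sketch with placeholder values:

$ aws ec2-instance-connect send-ssh-public-key --instance-id i-0123456789abcdef0 --instance-os-user ec2-user --availability-zone us-east-1a --ssh-public-key file://new-key.pub
$ ssh -i new-key ec2-user@<instance-ip> - the pushed key is only valid for about 60 seconds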

How to use node specific SSH keys with Rundeck AWS EC2 Resource Plugin?

I have configured a Rundeck server with the AWS EC2 Resource Plugin for the nodes.
Now I am using different SSH keys for multiple EC2 instances.
Is there any way to tell Rundeck to use different SSH keys for different EC2 instances (Rundeck nodes) in the same Rundeck project?
I have tried using the AWS Resource Plugin mapping to specify "ssh-keypath" for Rundeck, but had no luck. Is there an alternative?
You should be able to use ssh-keypath to point to a local file, or ssh-key-storage-path to point to a location in the Key Storage facility. You can also include things like ${node.instanceId} in the path(s) to dynamically specify the value. For the EC2 provider, you would set that in the "mapping" configuration, as something like ssh-key-storage-path.default=keys/nodes/node-${node.instanceId}.sshkey
If you only specified a static value for the keypath, that same value would be used for all nodes. Is that what you tried?
The alternative is to set the default value in the SSH Node Executor configuration within the project configuration, again using a dynamic node variable in the keypath. You can do that in the Project Configuration GUI (or in the project.properties file). If you do that, do not set a value in the EC2 provider, since it would override the project default.
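For the dynamic path to resolve, a key must exist at the matching Key Storage location for every node; a sketch of uploading one with the rd CLI (paths and file names are placeholders; check rd keys create --help for the exact options in your version):

$ rd keys create --path keys/nodes/node-i-0123456789abcdef0.sshkey --type privateKey --file ./i-0123456789abcdef0.pem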

How to secure an AWS EC2 instance when the SSH key is compromised or lost

I'm essentially an AWS noob.
I had a developer set up an EC2 instance with a load balancer to host a node.js-based API. He has now moved on from the company, but he still has the private key and could log in if he wanted to. I want to change the keys.
From what I have read, I need to relaunch the instance to get a new key pair. However, if I do this, will I lose all the node packages and other software installed on the current instance? What will happen with the load balancer? Do I need to update my DNS info to point to a new IP?
(Once situated, this time around I will create multiple key pairs for the devs to use.)
EDIT: Yes, I do have the private key and can do everything I need to. I just want to make sure HE no longer has access.
Take an AMI of the current instance for backup purposes. This will reboot the instance, but it will keep its existing IP, and you do not need to remove it from your ELB. You may need this AMI if you cannot connect back in after changing the key.
Log in as the root user, using the existing key.
From the shell, run the following commands:
$ ssh-keygen -t rsa -b 2048 -f user - generate a new key pair (creates the files "user" and "user.pub")
$ sudo su - - become root, if needed
$ cp /home/ubuntu/.ssh/authorized_keys /home/ubuntu/.ssh/authorized_keys.bak - back up the existing public key
$ mv user.pub /home/ubuntu/.ssh/authorized_keys - replace the existing public key in the authorized_keys file
$ chmod 600 /home/ubuntu/.ssh/authorized_keys - fix the permissions on the file
Copy the private key (the file called "user") generated by the ssh-keygen command to your local machine and delete it from the instance.
Connect to the instance with the new private key to confirm. IMPORTANT: keep the existing SSH session open and create a new session with the new key.
If you have any problems connecting with the new key, you still have the existing session open to troubleshoot.
As for cleanup, make sure to remove the old key pair from the AWS console, and invalidate any credentials if(!) they are not required for the existing services to run. If you granted the developer root access to your AWS console, you should reset those credentials.
NOTE: These steps assume an Ubuntu installation. If you are using another Linux distribution, replace ubuntu with the correct AWS username:
Amazon Linux: ec2-user
Ubuntu: ubuntu
Debian: admin
RHEL 6.4: ec2-user
RHEL 6.3: root
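For example, testing the new key from a second terminal while the old session stays open (the key path and hostname are placeholders):

$ ssh -i ./user ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com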
You can create a new key pair without creating a new EC2 instance: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair
It still looks like you need to launch a new EC2 instance (which is what assigns a new key), but if you use the same volume(s), or duplicate volumes created from snapshots, you shouldn't have to reinstall any software.
https://forums.aws.amazon.com/message.jspa?messageID=245314
As for DNS, I would point it at the load balancer; that way you can add/remove servers from the pool without DNS changes. Otherwise, assign an Elastic IP to the server, so that you can move the Elastic IP to the next server without changing DNS each time. Moving an Elastic IP is instant, whereas DNS changes take time to propagate through the network. Hope that helps.
So, I have resolved this issue myself, and I'm posting what I did in case it helps anyone else.
1. On my local machine I generated a new 2048-bit RSA key pair (a new pair can also be generated on AWS).
2. Import the new public key in the Amazon console.
3. Create an AMI of the running instance.
4. Launch a new (Ubuntu Linux) instance from that AMI, pointing it to the newly uploaded public key for login.
5. Once the instance is up, update the load balancer or DNS entries to point to the new instance, as appropriate.
6. Start whatever software the server is intended to run.
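Steps 1 and 2 can be done locally with ssh-keygen and the AWS CLI; a sketch (the key name is a placeholder; AWS CLI v1 takes file:// instead of fileb://):

$ ssh-keygen -t rsa -b 2048 -f newkey - creates newkey (private) and newkey.pub (public)
$ aws ec2 import-key-pair --key-name newkey --public-key-material fileb://newkey.pub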

How to limit access to a shared ami

I need to share an AMI so that a client can create their own instances through their own account. However, I do not want that client to be able to SSH in to the instance; I need to be able to SSH into it myself to maintain it. They will have FTP and WWW access only, and I've got that part working through SSH configuration. How do I keep them out of SSH when they start the instance with their own key pairs?
Well, you can't really prevent this if they are determined to get in, since they control the instance:
Stop instance → detach root EBS volume → attach elsewhere → modify contents → detach → reattach → restart → pwn3d.
However, according to the documentation, if you don't configure the AMI to load their public key, it just sits there and doesn't actually work for them.
Amazon EC2 allows users to specify a public-private key pair name when launching an instance. When a valid key pair name is provided to the RunInstances API call (or through the command line API tools), the public key (the portion of the key pair that Amazon EC2 retains on the server after a call to CreateKeyPair or ImportKeyPair) is made available to the instance through an HTTP query against the instance metadata.
To log in through SSH, your AMI must retrieve the key value at boot and append it to /root/.ssh/authorized_keys (or the equivalent for any other user account on the AMI). Users can launch instances of your AMI with a key pair and log in without requiring a root password.
If you don't fetch the key and append it as described, it doesn't appear that their key, just by virtue of being "assigned" to the instance, will actually give them any access to it.
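Concretely, the step a standard AMI performs at first boot, and that you would omit from yours, is a fetch from the instance metadata service along these lines (IMDSv1 form shown for brevity):

$ curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> /root/.ssh/authorized_keys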
I was finally able to accomplish this crazy task using the following process:
1) Log in as ubuntu.
2) Create a new user belonging to the sudo and admin groups.
3) Install all my software under the new user.
4) Verify that the new user has all required privileges.
5) chroot-jail the ubuntu user to FTP access only.
When the AMI is shared to the new zone/account, the ubuntu user exists as a sudoer but cannot SSH into the instance. FTP allows them to connect to the system, but they see a bare directory and cannot cd anywhere else on the system.
It's not a complete denial, but I think it will serve the purpose for this client.
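The answer doesn't show the jail configuration itself; one common way to implement step 5 with OpenSSH's built-in SFTP server is a Match block in sshd_config, sketched below (this yields SFTP rather than classic FTP, and the chroot directory must be root-owned and not writable by the user; a vsftpd chroot would be the analogous setup for plain FTP):

Match User ubuntu
    ChrootDirectory /home/ubuntu/ftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no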