Why is a Bastion Host more secure? - amazon-web-services

I've read that the best security practice for making EC2 instances Internet-accessible is to place them in a private VPC subnet, create a bastion host in a public VPC subnet, use a security group to only allow connections from the bastion host, and then use SSH key forwarding to log in to the private instances.
However, AWS seems to offer various configurations that provide similar functionality to an actual bastion host. For instance, using a security group on a public subnet seems pretty good, and if someone gets access to your bastion it seems likely that they're not far away from your private keys. In any case, is there anywhere I could find more info on this topic?

It's a matter of minimizing attack surface.
With a bastion host, your only exposure to the open internet (excluding any load balancers) is port 22, which is backed by a relatively trustworthy piece of software: the SSH daemon.
It's also a single point of management: you define one security group that identifies the IP addresses allowed to contact the bastion, and you maintain a single authorized_keys file containing the public keys of your authorized users. When a user leaves, you delete their line from each.
By comparison, if you rely solely on security groups to protect publicly-accessible hosts, you need to replicate the same settings on every group (and remove/update them as needed). And if you allow SSH access to those hosts, you have to distribute the updated authorized_keys file to every host after each change.
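As a rough sketch of that single point of management using the AWS CLI (the group IDs and office CIDR below are placeholders, not values from this question):

# Bastion accepts SSH only from a known CIDR (placeholder values)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24

# Private instances accept SSH only from the bastion's security group
aws ec2 authorize-security-group-ingress --group-id sg-0fedcba9876543210 \
    --protocol tcp --port 22 --source-group sg-0123456789abcdef0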
Although I can't recommend doing this, it's at least rational to open port 22 on the bastion host for world access. If you have a lot of users, or those users connect via tethered cellphones, it may even be reasonable. That's something that you'd never, ever want to do with arbitrary services.

You can find best practices for using a bastion host here: https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html
Access to the bastion hosts is locked down to known CIDR scopes for ingress. This is achieved by associating the bastion instances with a security group. The Quick Start creates a BastionSecurityGroup resource for this purpose.
Ports are limited to allow only the necessary access to the bastion hosts. For Linux bastion hosts, TCP port 22 for SSH connections is typically the only port allowed.
Note that it is pretty common to create an SSH tunnel to connect to a given resource through your Bastion Host: https://myopswork.com/transparent-ssh-tunnel-through-a-bastion-host-d1d864ddb9ae
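For example, with OpenSSH 7.3 or later you can make the hop transparent via ProxyJump in ~/.ssh/config (the host names, addresses and key paths below are placeholders):

Host bastion
    HostName 203.0.113.10
    User ec2-user
    IdentityFile ~/.ssh/bastion-key.pem

Host app-private
    HostName 10.0.2.15
    User ec2-user
    IdentityFile ~/.ssh/app-key.pem
    ProxyJump bastion

After that, ssh app-private (and scp/sftp to that host) tunnels through the bastion automatically.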
Hope it helps!

Let me answer this question in a simpler way.
First, one needs to understand what a bastion host is (some people call it a jump box).
The trick is that only one server, the bastion host, can be accessed via SSH from the Internet (and even that access should be restricted to specific source IP addresses). All other servers can be reached via SSH only from the bastion host.
This approach has two main security advantages:
1) You have only one entry point into your system, and that entry point does nothing but SSH. The chance of this box being hacked is small.
2) If one of your web servers, mail servers, FTP servers, and so on, is hacked, the attacker can't jump from that server to all the other servers.
It's important that the bastion host does nothing but SSH, to reduce the chance of it becoming a security risk itself.
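As an illustration of the hop (addresses and key names below are placeholders), you can use OpenSSH's -J flag, or agent forwarding so the private key never leaves your machine:

# One-off jump through the bastion (OpenSSH 7.3+)
ssh -J ec2-user@bastion.example.com ec2-user@10.0.2.15

# Or with agent forwarding: load the key locally, then hop
ssh-add ~/.ssh/private-servers.pem
ssh -A ec2-user@bastion.example.com
# ...and from the bastion:
ssh ec2-user@10.0.2.15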
Hope this helps others understand!

Related

AWS Network load balancer or EC2 Bastion host

One of the data providers only offers transfer to an FTP server.
To test the connection I started an FTP server in a public subnet and opened port 21 in the security group. Unfortunately the data did not arrive, and in the VPC Flow Logs I saw that, apart from port 21, other ports need to be open as well (FTP's passive-mode data ports). They change so often that I am not able to add them all to the security group.
I want my FTP server in a private subnet and some sort of network interface to handle incoming connections.
Therefore I want to set up either a Network Load Balancer or an EC2 jump host (I want a bastion host because I don't want to assign an Elastic IP to another instance; just one instance with an Elastic IP and the rest of the instances in the private subnet).
A Network Load Balancer listens on specific ports, and because there are a lot of ports and they change, I am not able to add them all. Is there a way around this?
The second approach is to set up an EC2 bastion host that would accept all connections but forward only what arrives on port 21.
Does this even make sense? Is there any pattern that is easier?
If you have any choice in the approach, commit to SFTP instead. Last time I saw it used, it did its job, precisely through a bastion in a setup similar to yours.
https://dev.to/tanmaygi/how-to-create-a-sftp-server-on-ec2centosubuntu--1f0m
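As a rough sketch (the server address, user and file name are placeholders): SFTP rides on SSH, so only TCP port 22 needs to be open and the problem of changing data ports disappears:

sftp -i ~/.ssh/transfer-key.pem datauser@sftp.example.com
sftp> put daily-export.csv /uploads/
sftp> bye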

Can any server be used as a bastion host?

I have a private RDS instance that I want to connect to using a bastion host.
I've found a couple of tutorials on how to set it up, which doesn't seem too advanced, but I struggle to understand what a bastion host actually is.
All the tutorials I've seen just create an empty EC2 instance (the bastion host) and edit the RDS security group to allow incoming traffic from it, and voilà, the connection from the local machine works.
What I struggle to understand is that there's no configuration on the EC2 instance that enables this behaviour.
Wouldn't that mean that any server that has access to RDS could be used as a bastion host?
For example, I have an EKS cluster where I host a couple of services.
Some of these services are supposed to have access to RDS.
So in order for the services to access RDS I put RDS in the same VPC and Security Group as eks-nodegroups.
Even though the services that need access to RDS aren't publicly accessible, there are publicly accessible services that are running in the same VPC and Security Group.
Would I then be able to use one of the publicly accessible services as a bastion host in order to gain access to RDS from anywhere, thus making it public?
From Bastion - Wikipedia:
A bastion or bulwark is a structure projecting outward from the curtain wall of a fortification, most commonly angular in shape and positioned at the corners of the fort.
It 'sticks out' from the walled portion of the city and provides added security by being able to target attackers attempting to scale the wall. In a similar way, a bastion host 'sticks out' from a walled computer network, acting as a secure connection to the outside world.
When using an Amazon EC2 instance as a Bastion Host, users typically use SSH Port Forwarding. For example, if the Amazon RDS database is running on port 3306, a connection can be established to the Bastion server like this:
ssh -i key_file.pem ec2-user@BASTION-IP -L 8000:mysql-instance1.123456789012.us-east-1.rds.amazonaws.com:3306
This will 'forward' local port 8000 to the bastion, which will then forward traffic to port 3306 on the database server. Thus, you can point an SQL client to localhost:8000 and it would connect to the Amazon RDS server. All software for making this 'port forward' is part of the Linux operating system, which is why there is no configuration required.
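For example (the user and schema name below are placeholders), with the tunnel above in place a MySQL client on your machine can connect like this:

# Connects via the forwarded local port; traffic ends up at the RDS instance
mysql -h 127.0.0.1 -P 8000 -u admin -p mydatabase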
Yes, you can use anything as a Bastion Host, as long as it has:
The ability to receive incoming connections from the Internet
The ability to (somehow) forward those requests to another server within the VPC
A Security Group that permits the inbound traffic from the Internet (or preferably just your IP address), and the target resource permits incoming traffic from this security group

Using NACL to Block traffic

I have an application on an EC2 instance which connects to a website (github.com) to download an application repository (say thrice a week, or a bit more frequently).
I'd like to block access to my VPC using NACLs, so that no traffic other than from github.com can get through (keeping in mind that NACLs are stateless).
The issue I am facing is that I cannot whitelist a website using a NACL; an IP-based approach is not workable because the IPs are always changing.
Can someone suggest a better solution or a fix that we can apply here?
NACLs cannot resolve DNS names; they operate purely on IP addresses and ports, while hostname information lives at higher OSI layers (for example in the HTTP request itself).
One option is to place your EC2 instance behind a NAT gateway, effectively moving it into a private subnet; its outbound traffic is then translated to an IP that does not change when facing the public internet, such as an Elastic IP. In this way you protect your EC2 instance from inbound Internet traffic while presenting a consistent IP address.
Another option is to use ssh-keygen to generate a public/private key pair, register the public key with the respective git repo (as an SSH key), and then block other protocols and traffic once that one-to-one trust is established. A more secure version of this is tackled nicely in this post: EC2 can't SSH into github
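A minimal sketch of that key-based setup (the key path, comment and repository name are placeholders; adding the deploy key happens in the GitHub UI):

# On the EC2 instance: generate a key pair dedicated to this repo
ssh-keygen -t ed25519 -f ~/.ssh/github_deploy -C "app-server"

# Register ~/.ssh/github_deploy.pub as a deploy key on the repository, then
# tell SSH to use it for github.com via ~/.ssh/config:
#   Host github.com
#       IdentityFile ~/.ssh/github_deploy
#       IdentitiesOnly yes

ssh -T git@github.com                                   # verify authentication works
git clone git@github.com:example-org/example-repo.git   # fetch over SSH only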

Amazon EC2 Security Group with Host / Dynamic IP / DNS

I am seeking some guidance on the best approach to take with EC2 security groups and services with dynamic IPs. I want to make use of services such as SendGrid, Elastic Cloud etc., which all use dynamic IPs over ports 80/443. However, access to ports 80/443 is closed with the exception of whitelisted IPs. So far the solutions I have found are:
A cron job to ping the service, take its IPs and update the EC2 security group via the EC2 API.
Create a new EC2 instance to act as a proxy with ports 80/443 open. The new server communicates with SendGrid/Elastic Cloud, inspects the responses and returns the relevant parts to the main server.
Are there any other better solutions?
Firstly, please bear in mind that security groups in AWS are stateful, meaning that, for example, if you open ports 80 and 443 to all destinations (0.0.0.0/0) in your outbound rules, your EC2 machines will be able to connect to remote hosts and get the response back even if there are no inbound rules for a given IP.
However, this approach works only if the connection is always initiated by your EC2 instance and the remote services are just responding. If you require connections to your EC2 instances to be initiated from the outside, you do need to specify inbound rules in the security group(s). If you know a CIDR block covering their public IP addresses, that can solve the problem, as you can specify it as the source in an inbound security group rule. If you don't know the IP range of the hosts that are going to reach your machines, then access restriction at the network level is not feasible and you need to implement some form of authorisation of the requester.
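A sketch of that outbound-only setup with the AWS CLI (the group ID is a placeholder; note a security group's default egress rule already allows all outbound traffic, so this only matters if you have locked egress down):

# Let instances initiate HTTP/HTTPS anywhere; replies are allowed back
# automatically because security groups are stateful
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0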
P.S. Please also bear in mind that there is a soft default limit of 50 inbound or outbound rules per security group.

AWS - Accessing instances in private subnet using EIP

I want to access a few instances in my private subnet using EIPs. Is there a way? I know it doesn't make much sense. But let me explain in detail.
I have a VPC with 2 subnets.
1) 192.168.0.0/24 (public subnet) has EIPs attached to it
2) 192.168.1.0/24 (private subnet)
There is a NAT instance between these to allow the private instances to have outbound access to the internet. Everything works fine as described here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
But now, for a temporary period, I need to address the instances in the private subnet directly from the internet using an EIP.
Is this possible by setting up new route tables for that particular instance alone? or anything else?
Here are the limitations:
1) There can't be any downtime on any instances on the private subnet
2) Hence it goes without saying, I can't create a new subnet and move these instances there.
It should be as simple as: attach, use, remove.
The only other way I have right now is some kind of port forwarding with iptables from instances in the public subnet (which have EIPs) to instances in the private subnet... but this looks messy.
Any other way to do it?
Of course, the stuff in the private subnet is in the private subnet because it shouldn't be accessible from the Internet. :)
But... I'm sure you have you reasons, so here goes:
First, no, you can't do this in a straightforward attach → use → remove way, because each subnet has exactly one default route, and that either points to the igw object (public subnet) or the NAT instance (private subnet). If you bind an elastic IP to a machine in the private subnet, the inbound traffic would arrive at the instance, but the outbound reply traffic would be routed back through the NAT instance, which would either discard or mangle it, since you can't route asymmetrically through NAT, and that's what would happen here.
If your services are TCP services (http, remote desktop, yadda yadda) then here's a piece of short term hackery that would work very nicely and avoid the hassles of iptables and expose only the specific service you need:
Fire up a new micro instance with Ubuntu 12.04 LTS in the public subnet, with an EIP and an appropriate security group to allow the inbound Internet traffic to the desired ports. Allow yourself SSH access to the new instance. Allow access from that machine to the inside machine. Then:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install redir
Assuming you want to send incoming port 80 traffic to port 80 on a private instance:
$ sudo redir --lport=80 --cport=80 --caddr=[private instance ip] --syslog &
Done. You'll have a log of every connect and disconnect, with port numbers and bytes transferred, in your syslogs. The disadvantage is that if your private host looks at the IP of the connecting machine, it will always see the internal IP of the instance running redir, not the original client's address.
You only have to run it with sudo if you're binding to a port below 1024 since only root can bind to the lower port numbers. To stop it, find the pid and kill it, or sudo killall redir.
The spiffy little redir utility does its magic in user space, making it simpler (imho) than iptables. It sets up a listen socket on the designated --lport port. For each inbound connection, it forks itself, establishes an outbound connection to the --caddr on --cport and ties the two data streams together. It has no awareness of what's going on inside the stream, so it should work for just about anything TCP. This also means you should be able to pass quite a lot of traffic through, in spite of using a Micro.
When you're done, throw away the micro instance and your network is back to normal.
Depending on your requirements, you could end up putting in a static route pointing directly to the igw.
For example, if you know the source on the internet from which you want to allow traffic, you can put the route x.x.x.x/32 -> igw into your private routing table. Because your instance has an EIP attached, it will be able to reach the igw, and traffic out to that destination will go where it should and not through the NAT.
I have used this trick a few times for short term access. Obviously this is a short term workaround and not suitable for prod environments, and only works if you know where your internet traffic is coming from.
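A sketch of that static route with the AWS CLI (the route table ID, IGW ID and source address below are placeholders):

# Return traffic for one known Internet source goes straight to the IGW,
# bypassing the NAT instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 198.51.100.7/32 \
    --gateway-id igw-0123456789abcdef0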
I suggest you set up a VPN server. This script creates a VPN server without having to do much work: https://github.com/viljoviitanen/setup-simple-openvpn
Just stop and start as required.
1) Use of the redir utility on a temporary EC2 instance to forward traffic into the NATed private subnet.
This option is the least intrusive. It is possible to make it persistent by creating a system service, so that the socket is created again after a reboot (a sketch follows at the end of this answer).
2) Static routing table
This requires medium to advanced knowledge of AWS VPC, and depending on the case you might also need to deal with AWS Route 53.
3) VPN
This could mean dealing with the Amazon IGW plus some extra steps.
The best solution for me was option 1, plus different port mappings, a DNS record in Route 53 and security group restrictions. My requirement was the opposite: keep the connection available for certain users to access on a daily basis, while being able to stop the EC2 instance at some point.
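A sketch of such a service unit for option 1 (the unit name, ports, binary path and private address are placeholders; the flags follow the older redir syntax used earlier in this thread, while newer redir releases use positional arguments):

# /etc/systemd/system/redir-forward.service
[Unit]
Description=Forward inbound port 80 to a private instance
After=network.target

[Service]
# redir stays in the foreground, so systemd supervises it directly
ExecStart=/usr/bin/redir --lport=80 --cport=80 --caddr=10.0.1.25
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now redir-forward so the listener comes back after a reboot.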