We are evaluating AWS for our cloud usage.
However, our corporate proxy is blocking access to the instances via SSH/RDP.
I checked with the Ops team and they said they will open the ports for SSH and RDP, but I have to give them one source subnet and only one destination subnet/IP.
I am in the ap-southeast-2 region and there are nearly 27 subnets listed for it, including the GLOBAL ranges (for S3).
AWS Subnets - Region-wise
Question
1) Is there a way I can force AWS to create instances within a particular IP range? (I am thinking that if I pick one of those 27 subnets, I can give that subnet as the destination to our Ops team.)
2) Can I use an RDP jumpbox? Meaning, can I create one EC2 instance, give the IP of that machine to Ops to allow access, and then use that machine to RDP/SSH to the other instances?
Please let me know the other options and your suggestions.
Thanks in advance.
There is no way to force instances to launch within a particular public IP CIDR, and a CIDR large enough to be useful may well include non-AWS IPs too. One option is to allocate a batch of Elastic IPs (the default limit is 5 EIPs per region), keep the ones that fall in the same /24 or /21, and release the rest. Even then there is no guarantee you will get EIPs in the same small CIDR.
SSH with a jumphost is easy and used by many. RDP via a jumphost may be possible, but I am not sure how it can be done.
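For the SSH half, a minimal jumphost setup looks roughly like this (203.0.113.10 and 10.0.1.25 are example addresses for the public jumpbox and the private instance, and ec2-user is an assumed username):

# ~/.ssh/config on your workstation
Host bastion
    HostName 203.0.113.10        # public EIP of the jumpbox (example)
    User ec2-user

Host private-ec2
    HostName 10.0.1.25           # private IP of the target instance (example)
    User ec2-user
    ProxyJump bastion            # hop through the jumpbox (OpenSSH 7.3+)

# then connect in one step:
ssh private-ec2

For RDP, one common workaround is to tunnel port 3389 over the same SSH hop and point the RDP client at the local end of the tunnel:

ssh -L 13389:10.0.1.25:3389 bastion   # then RDP to localhost:13389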
I have found a way with the help of my colleague.
Luckily we had a guest network and we used that to achieve this.
Steps
1) I connected to both the LAN network and the guest network.
2) We referenced the Add a Static Route article in the Microsoft documentation:
route add "destination" mask "subnetmask" "gateway" metric "costmetric" if interface
eg - route add ec2InstanceIP mask 255.255.255.255 AlternateNetworkgateway METRIC 10
Now I am able to SSH, but RDP is still not possible. I will explore and update if I find a way.
Related
I need to know what IP subnet will be used by AWS EC2 instances
Reading:
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
https://superuser.com/questions/989123/amazon-ec2-public-ip-address/989400
I understand that I can use this URL giving the IP ranges:
https://ip-ranges.amazonaws.com/ip-ranges.json
But I am a bit confused by the output.
I understand I need to filter on:
region
type: EC2
Is my understanding correct ?
If so, I get 137 IP subnets, which is a very large number.
How can I have more control over these IP subnets?
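For reference, the published JSON can be filtered on region and service with something like the jq one-liner below (note that the field in ip-ranges.json is called service rather than type; ap-southeast-2 is just an example region):

# list all EC2 prefixes announced for one region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.region == "ap-southeast-2" and .service == "EC2") | .ip_prefix'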
It would be extremely fragile to base your firewall rules on the set of possible IPs that any EC2 instance in those regions can have.
Allowing access from one region might arguably have been fine, but you are opening up your firewall to five traffic-heavy and very popular AWS regions.
That blanket policy allowing all traffic will essentially defeat the purpose of having a firewall that should only allow specific traffic through.
If you're actually looking for security, assign your EC2 instances an Elastic IP (EIP) and allow traffic only from those particular IPs.
Depending on how many EC2 instances you have, it may also be much easier, quicker and (however slightly) cheaper to route all your EC2 instances through one NAT gateway with one EIP (if you don't need all your EC2 instances to have different public IPs).
You'll save yourself the headache of keeping up to date with possible IP range changes made by Amazon, have cleaner firewall policies and have tighter security by only letting traffic that you're sure is coming from your instances through.
Win, win, win.
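If you go the single-NAT-gateway route, the rough shape of it with the AWS CLI is sketched below (every ID is a placeholder, not taken from a real account):

# allocate one EIP; note the AllocationId it returns
aws ec2 allocate-address --domain vpc

# create the NAT gateway in a public subnet using that allocation
aws ec2 create-nat-gateway --subnet-id subnet-0aaa11122233344 --allocation-id eipalloc-0bbb11122233344

# point the private subnets' default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-0ccc11122233344 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0ddd11122233344

After that, the corporate firewall only needs to allow the single EIP.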
Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation which I have:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want to do is set up something like a 'security group' so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
What we do in AWS is that only services that are part of a security group can access each other.
I'm not very sure, but I guess in GCP I need to make use of firewall rules (not sure at all).
If I'm correct, could someone please guide me on how to go about this? And if I'm wrong, could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups. More details can be found here. You can place your PostgreSQL database, Redis instance and Node.js server inside the GCP VPC.
Make Node.js server available to the public via DNS.
Set a default-allow-internal rule, so that only the services inside the VPC can access each other (blocking public access to the DB and Redis).
As an alternative approach, you may also keep all three servers public and only allow Node.js IP address to access DB and Redis servers, but the above solution is recommended.
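As a rough sketch of that internal-only rule (the network name, ports and source range below are illustrative), it could be created like this:

# allow traffic to PostgreSQL and Redis only from addresses inside the VPC subnet
gcloud compute firewall-rules create allow-internal-db \
    --network=my-vpc \
    --allow=tcp:5432,tcp:6379 \
    --source-ranges=10.128.0.0/20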
Security groups in AWS are instance-attached, firewall-like components. So, for example, you can have an SG at the instance level, similar to configuring iptables on a regular Linux box.
Google firewall rules, on the other hand, operate more at the network level. For that level of granularity, the instance-level role of security groups can be covered by a host firewall on the VM itself (see the iptables sketch after this list), for example one of the following:
firewalld
nftables
iptables
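For instance, on the VM hosting PostgreSQL, restricting access to the application server alone could look roughly like this (10.128.0.5 is a made-up internal IP for the Node.js host):

# accept PostgreSQL connections only from the app server, drop everything else on that port
iptables -A INPUT -p tcp -s 10.128.0.5 --dport 5432 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP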
The thing is that in AWS you also have network ACLs, which attach to subnets rather than instances. A subnet-level NACL is the closer analogue of Google firewall rules, yet security groups still give you a bit more granularity, since you can mix different security groups within one subnet, while in GCP the firewall rules are defined per network. At that level, protection should come from the firewalls guarding the subnets.
Thanks @amsh for the solution to the problem. But a few more things needed to be done, so I'll list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instance in the same region as that of the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and you'll be good to go.
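The same steps can be approximated with gcloud (every name, range and region below is illustrative; adjust to your project):

# 1. VPC network plus a subnet in us-central1
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create my-subnet --network=my-vpc --region=us-central1 --range=10.0.0.0/24

# 2. Serverless VPC Access connector in the same region (its /28 must not overlap the subnet)
gcloud compute networks vpc-access connectors create my-connector --network=my-vpc --region=us-central1 --range=10.8.0.0/28

# 3. Attach the connector to the Cloud Run service
gcloud run deploy my-service --image=gcr.io/my-project/my-image --region=us-central1 --vpc-connector=my-connector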
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, otherwise they won't connect via the Private IP.
Avoid changing the firewall rules: The firewall rules must not be changed unless you need them to perform differently than they normally do.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
I have successfully set up an IPsec VPN between 2 VPCs in 2 different regions via StrongSwan, and the 2 gateways are able to connect.
The problem is that the other instances in each VPC/subnet are not able to ping the other VPC/subnet:
VPC A/gateway can talk to VPC B/gateway...
VPC A/Instance can talk to VPC A/Gateway
Same applies for VPC B... But
VPC A/Instance can NOT talk to VPC B/Gateway or VPC B/Instances (and the same applies from VPC B to VPC A).
I have checked and tried playing with the routes of table 220 and also with ICMP redirects, with no luck.
Anyone can assist please?
Regards.
There is way too little information to provide an exact answer; the topology and addressing plan, the relevant security groups and EC2 configuration, and the StrongSwan and relevant Linux kernel configuration would be needed.
Still, please let me offer a few hints on what to do in order to allow routing among subnets connected via VPN:
IP forwarding must be enabled in the Linux kernel, assuming StrongSwan runs on a Linux EC2 instance. It can be done with the following command, run as root:
echo 1 > /proc/sys/net/ipv4/ip_forward
Please note that the setting would not persist during a reboot. How to make the setting persistent depends on the Linux distribution.
EC2 source/destination checking must be disabled on the StrongSwan instances (this can be changed from the EC2 console or via the CLI, as shown in the sketch below).
The VPC routing tables must be set to route the traffic for the other subnet in the other region via the StrongSwan EC2 node, instead of via the default gateway.
Traffic selectors (leftsubnet and rightsubnet) in ipsec.conf must be set accordingly.
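To make those hints concrete, here is a rough sketch; all CIDRs, IDs and file paths are examples, not taken from the question:

# persist IP forwarding across reboots (works on most modern distributions)
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ipforward.conf
sysctl --system

# disable the source/destination check on each StrongSwan instance
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

# in VPC A's route table, send VPC B's CIDR via the StrongSwan instance (and the mirror route in VPC B)
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.1.0.0/16 --instance-id i-0123456789abcdef0

# in /etc/ipsec.conf, selectors should cover the whole VPC CIDRs, not just the gateways (snippet only)
conn vpc-a-to-vpc-b
    leftsubnet=10.0.0.0/16
    rightsubnet=10.1.0.0/16
    auto=start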
I need to add multiple ENIs to an EC2 instance and would like to use each interface with multiple private and associated elastic IPs. My current EC2 instance allows for multiple network interfaces and multiple EIPs per interface. I have already created and connected the ENIs and assigned additional private and elastic IPs. The problem comes when I try to bind to the EIPs on the ENIs (eth1, eth2...) for outbound traffic. The bind is successful, however the outbound request times out.
I am able to add multiple IPs to the default network interface (eth0) of my EC2 instance and was also able to send outbound traffic using those IPs. It required executing the command below for each new IP, but it worked.
ip addr add dev eth0 xxx.xx.x.xxx/24
Does anyone know how to get this to work? I suspect my route table or some other network configuration needs to be updated; however, this is out of my wheelhouse. If there is an automated way or a script that I can run, that would be even better.
Thanks in advance.
Got my answer! I found this blog post, which had everything I needed. Good luck to those who are looking for something similar.
http://randomizedsort.blogspot.com/2012/06/poor-mans-static-ip-for-ec2-aka-elastic.html
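For what it's worth, the usual fix for this symptom is source-based policy routing, roughly like the following (the addresses are made up; substitute eth1's private IP and its subnet's gateway):

# give eth1 its own routing table (here table 100) with its subnet gateway as the default route
ip route add default via 10.0.1.1 dev eth1 table 100

# make traffic sourced from eth1's private IP use that table
ip rule add from 10.0.1.50/32 table 100

# repeat with another table number for eth2, and add a rule for each additional private IP on the interface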
I want to access a few instances in my private subnet using EIPs. Is there a way? I know it doesn't make much sense. But let me explain in detail.
I have a VPC with 2 subnets.
1) 192.168.0.0/24 (public subnet) has EIPs attached to it
2) 192.168.1.0/24 (private subnet)
There is a NAT instance between these to allow the private instances have outbound access to the internet. Everything works fine as mentioned here : http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
But now, for a temporary period, I need to address the instances on the private subnet directly from the internet using an EIP.
Is this possible by setting up new route tables for that particular instance alone? Or is there anything else?
Here are the limitations:
1) There can't be any downtime on any instances in the private subnet.
2) Hence, it goes without saying, I can't create a new subnet and move these instances there.
It should be as simple as: attach, use, remove.
The only other way I have right now is some kind of port forwarding with iptables, from instances on the public subnet (which have EIPs) to instances on the private subnet... but this looks messy.
Any other way to do it?
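For completeness, the iptables port forwarding mentioned above would look roughly like this when run on a public-subnet instance that owns an EIP (192.168.1.20 stands in for the private instance; ports are examples):

# forward inbound TCP 2222 on this host to the private instance's SSH port
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.20:22
# masquerade so replies come back through this host rather than straight out of the NAT
iptables -t nat -A POSTROUTING -d 192.168.1.20 -p tcp --dport 22 -j MASQUERADE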
Of course, the stuff in the private subnet is in the private subnet because it shouldn't be accessible from the Internet. :)
But... I'm sure you have your reasons, so here goes:
First, no, you can't do this in a straightforward attach → use → remove way, because each subnet has exactly one default route, which points either to the igw object (public subnet) or to the NAT instance (private subnet). If you bind an Elastic IP to a machine in the private subnet, the inbound traffic would arrive at the instance, but the outbound reply traffic would be routed back through the NAT instance, which would either discard or mangle it: you can't route asymmetrically through NAT, and that's what would happen here.
If your services are TCP services (http, remote desktop, yadda yadda), then here's a piece of short-term hackery that would work very nicely, avoid the hassles of iptables, and expose only the specific service you need:
Fire up a new micro instance with Ubuntu 12.04 LTS in the public subnet, with an EIP and an appropriate security group to allow the inbound Internet traffic on the desired ports. Allow yourself SSH access to the new instance, and allow access from that machine to the inside machine. Then:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install redir
Assuming you want to send incoming port 80 traffic to port 80 on a private instance:
$ sudo redir --lport=80 --cport=80 --caddr=[private instance ip] --syslog &
Done. You'll have a log of every connect and disconnect, with port numbers and bytes transferred, in your syslogs. The disadvantage is that if your private host looks at the IP of the connecting machine, it will always see the internal IP of the forwarding instance rather than the real client.
You only have to run it with sudo if you're binding to a port below 1024, since only root can bind to the lower port numbers. To stop it, find the pid and kill it, or use sudo killall redir.
The spiffy little redir utility does its magic in user space, making it simpler (imho) than iptables. It sets up a listen socket on the designated --lport port. For each inbound connection, it forks itself, establishes an outbound connection to the --caddr on --cport and ties the two data streams together. It has no awareness of what's going on inside the stream, so it should work for just about anything TCP. This also means you should be able to pass quite a lot of traffic through, in spite of using a Micro.
When you're done, throw away the micro instance and your network is back to normal.
Depending on your requirements, you could end up putting in a static route directly to the igw.
For example, if you know the source on the internet from which you want to allow traffic, you can put the route x.x.x.x/32 -> igw into your private routing table. Because your instance has an EIP attached, it will be able to reach the igw, and traffic out to that destination will go where it should rather than to the NAT.
I have used this trick a few times for short-term access. Obviously it is a short-term workaround, not suitable for prod environments, and it only works if you know where your internet traffic is coming from.
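With the AWS CLI the trick looks roughly like this (the route table ID, gateway ID and source address are placeholders):

# in the PRIVATE subnet's route table, send traffic for the one known internet source straight to the igw
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 203.0.113.25/32 --gateway-id igw-0123456789abcdef0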
I suggest you set up a VPN server. This script creates a VPN server without much work: https://github.com/viljoviitanen/setup-simple-openvpn
Just stop and start as required.
1 - Use the redir utility from a temporary EC2 instance in front of the NAT-ed private subnet.
This option is the least intrusive. It is possible to make it persistent by creating a system service, so that the socket is recreated after a reboot.
2 - Static routing table:
This requires medium to advanced knowledge of AWS VPC, and depending on the case you might also need to deal with AWS Route 53.
3 - VPN:
This could mean dealing with the Amazon IGW plus some extra steps.
The best solution for me was option 1, plus different port mappings, a DNS record in Route 53 and security group restrictions. My requirement was the opposite of a one-off: to keep the connection available for certain users on a daily basis, while still being able to stop the EC2 instance at some point.