Best Practice: NAT vs Elastic IP - amazon-web-services

I have two basic setups for a web application that resides behind an ELB on Amazon Web Services.
Layout A:
     +-----+
  +--+ ELB +--+
  |  +-----+  |
  |           |
  |           |
+-v-------+ +-v-------+      +---------------+
| EC2/EIP | | EC2/EIP +--+-> | HTTP RESPONSE |
+---------+ +---------+  |   +---------------+
                         |
                         |   +------------------+
                         +-> | EXTERNAL WEBSITE |
                         |   +------------------+
                         |
                         |   +-----+
                         +-> | API |
                             +-----+
Layout B:
      +-----+
   +--+ ELB +---+
   |  +-----+   |
   |            |
   |            |
+--v--+      +--v--+   +-----+      +---------------+
| EC2 |      | EC2 +---+ NAT +--+-> | HTTP RESPONSE |
+-----+      +-----+   +-----+  |   +---------------+
                                |
                                |   +------------------+
                                +-> | EXTERNAL WEBSITE |
                                |   +------------------+
                                |
                                |   +-----+
                                +-> | API |
                                    +-----+
I believe both architectures have pros and cons:
Layout A:
Does the web server send the HTTP response back through the ELB? If it went directly to the user, would that improve response times?
If I limit outgoing traffic on the security group to the HTTP port only, is there still any security threat?
Layout B:
Does this design introduce another point of failure (the NAT)?
Will it work for OAuth communication?
Can it work with third-party CI and orchestration tools (Jenkins, Chef)?
Both designs are working well, but which is the best practice for infrastructure, considering performance and security?
Thanks

The short answer is that in both cases the traffic that comes in through the ELB goes back out through the ELB.
For layout A:
For requests that arrive through the ELB, only the inbound port matters as far as the security group is concerned.
For other traffic that originates on the EC2 instances and goes out to the outside world, you would need to open the ports that those services use.
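As a rough sketch of that (the security group ID here is a placeholder, not from the question), the egress rules could be tightened with the AWS CLI to allow only outbound HTTPS:
# Remove the default allow-all egress rule (placeholder group ID)
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 --protocol all --cidr 0.0.0.0/0
# Allow only outbound HTTPS
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0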
For layout B:
Yes, the NAT is a single point of failure: if you lose it, you lose connectivity to the outside world.
Yes. To the outside world, the traffic will appear to originate from the NAT box.
Normally, inbound requests to your service go through an ELB.
Traffic that originates in the VPC and needs to go outside goes through a NAT. To address the single point of failure, you have the option of a high-availability NAT setup; or, if you run multi-region and your app is designed to survive region failures, you just need to monitor for and catch NAT machine failures.
The big advantage of using a NAT is that not every machine that needs to reach the outside world has to have an EIP, and the NAT machine can run a security-hardened image. You set a clear boundary for your VPC and can secure it better.
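To make the routing side concrete: the private subnets' default route points at the NAT. A sketch with placeholder IDs using the AWS CLI (with a managed NAT gateway you would pass --nat-gateway-id instead of --instance-id):
# Send all internet-bound traffic from the private route table through the NAT instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0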

Related

Flat network doesn't work - OpenStack Train

I am following this tutorial: https://docs.openstack.org/install-guide/launch-instance-networks-selfservice.html. I created the provider network and the self-service network. The self-service network works without any issue (the instance gets an internal IP and uses the router to access the internet).
If I connect the instance directly to the provider network, DHCP does not work. I can manually assign a public IP address to the instance and it can access the internet, but I can't ping the router or any other instance on the same (or another) compute host.
I disabled firewalld. Using CentOS 7, OpenStack Train release.
Let me try to explain better:
I created the provider network with DHCP enabled:
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
openstack subnet create --network provider --allocation-pool start=X.X.X.101,end=X.X.X.250 --dns-nameserver 8.8.4.4 --gateway X.X.X.1 --subnet-range X.X.X.0/24 provider
And created the router:
openstack router create router
openstack router set router --external-gateway provider
ip netns:
qrouter-c7fa637e-89bc-4540-9c7a-d890267d176b (id: 2)
qdhcp-54ffc14e-8e9b-42fc-a932-c854ac31876d (id: 1)
qdhcp-19ea13ae-4b2f-4f20-895d-8bbdb266fe88 (id: 0)
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                         | Status |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| 7b1c9780-2021-4768-ae78-7585825b3b08 |      | fa:16:3e:c7:df:b8 | ip_address='172.16.1.1', subnet_id='30201c79-7d31-4452-b74b-3a06742d4f94' | ACTIVE |
| bfc6a219-a7f8-4d46-a6e7-6c30b0e4569e |      | fa:16:3e:8c:a7:4f | ip_address='X.X.X.106', subnet_id='65ae3be8-b92b-4b88-80ad-b42822d8a93e'  | ACTIVE |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
This is the result after creating the instances:
openstack server list
+--------------------------------------+----------+--------+--------------------+-------+--------+
| ID                                   | Name     | Status | Networks           | Image | Flavor |
+--------------------------------------+----------+--------+--------------------+-------+--------+
| 6d0d7fc7-dd07-42e8-affa-27cbe900a7cf | teste4-1 | ACTIVE | provider=X.X.X.103 |       | hufe   |
| b5b711c6-e8c5-483d-904a-264bc3a0f08c | teste4-2 | ACTIVE | provider=X.X.X.101 |       | hufe   |
+--------------------------------------+----------+--------+--------------------+-------+--------+
The problem:
The instances did not get an IP.
If I manually set the IP on the instances, they can't ping my router IP (X.X.X.106).
The instances can't ping each other (I've manually set the IP on both instances).
Both instances can ping my gateway IP (which is outside the OpenStack infrastructure).
I can ping the router from outside the OpenStack infrastructure.
I can't find any errors in the logs. Does someone have an idea what is happening?
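One way I could narrow this down (a diagnostic sketch, assuming one of the qdhcp namespaces listed above belongs to the provider network) is to watch for DHCP traffic inside that namespace while an instance boots:
# If requests arrive but no replies go out, dnsmasq in the namespace is suspect;
# if nothing arrives at all, traffic is being dropped before the namespace.
ip netns exec qdhcp-19ea13ae-4b2f-4f20-895d-8bbdb266fe88 tcpdump -ni any port 67 or port 68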

Amazon AWS EC2: disable access to the installed application via the public IPv4

I created a new Amazon AWS EC2 instance.
I installed an Apache2 web server with a WordPress app.
I configured my domain name and added a load balancer to redirect to HTTPS using a public Amazon SSL certificate.
All works perfectly, and I can access my web site at https://mysiteweb.com/.
Even when I access my app site at http://mysiteweb.com, the redirection to https:// is performed.
The problem is that I can still access my app using the EC2 public IPv4, http://XX.XXX.XXX.XX, and here no redirection is performed.
Same thing with the public DNS name (IPv4), ec2-XX-XX-XX-XX.compute-1.amazonaws.com: no redirection here either.
How can I resolve this?
Thank you.
You should update the security group of your instance to only allow inbound access on port 80/443 from the security group attached to the load balancer.
Your load balancer has at least one security group attached such as that below
sg-123456
INBOUND RULES
| Protocol | Port | Source    |
-------------------------------
| TCP      | 80   | 0.0.0.0/0 |
| TCP      | 443  | 0.0.0.0/0 |
You would then update the instance security group to match the example below, where sg-123456 is the load balancer's security group.
sg-123457
INBOUND RULES
| Protocol | Port | Source    |
-------------------------------
| TCP      | 80   | sg-123456 |
| TCP      | 443  | sg-123456 |
By doing this you prevent anything other than the load balancer from making HTTP requests to your instance.
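On the command line, those rules could look like the following sketch (reusing the example group IDs above):
# Allow HTTP/HTTPS to the instance only from the load balancer's security group
aws ec2 authorize-security-group-ingress --group-id sg-123457 --protocol tcp --port 80 --source-group sg-123456
aws ec2 authorize-security-group-ingress --group-id sg-123457 --protocol tcp --port 443 --source-group sg-123456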
You can further increase security of your instance and prevent this scenario by moving your instance into a private subnet so that no one is able to connect to it publicly.
In addition, configure the web server you're running to redirect any hostname that is not a target hostname to the hostname you're expecting.
This can be accomplished by adding a default vhost that catches all requests; in web servers such as Apache and Nginx, this is the first vhost that has been defined. Then add an additional vhost with the ServerName/ServerAlias set to the domain you're anticipating the user landing on.
Doing this prevents crawls against your load balancer's address from returning your site.
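A minimal sketch of that for Apache (paths assume a Debian/Ubuntu layout, and mysiteweb.com is taken from the question; the 000- prefix makes the catch-all sort first):
# Catch-all vhost: any request whose Host header matches no named vhost gets a 404
cat > /etc/apache2/sites-available/000-catchall.conf <<'EOF'
<VirtualHost *:80>
    ServerName catchall.invalid
    Redirect 404 /
</VirtualHost>
EOF
# Named vhost for the real site
cat > /etc/apache2/sites-available/mysiteweb.conf <<'EOF'
<VirtualHost *:80>
    ServerName mysiteweb.com
    ServerAlias www.mysiteweb.com
    DocumentRoot /var/www/html
</VirtualHost>
EOF
a2ensite 000-catchall mysiteweb && systemctl reload apache2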
The issue can also be rectified by configuring the security group (SG) of your EC2 instance to allow incoming connections only from the SG of your load balancer; see:
Security groups for your Application Load Balancer
Security groups for instances in a VPC

Changing Ansible dynamic inventory order based on target group health check status

I'm trying to set up rolling deployments on an AWS EC2 two-node cluster behind an ALB via Ansible. The rough process goes like this for each node, in serial —
+----------+   +----------+   +----------+
|Remove app|   |Redeploy  |   |Add back  |
|from load |-->|new app   |-->|to load   |
|balancer  |   |          |   |balancer  |
+----------+   +----------+   +----------+
I use an Ansible dynamic inventory to select my nodes, and they're sorted by IP address by default. Now consider these 4 scenarios right before deployment —
Both nodes are healthy.
Node #1 is unhealthy and node #2 is healthy.
Node #1 is healthy and node #2 is unhealthy.
Both nodes are unhealthy.
Now under scenario #3, I'd end up removing the only healthy node. How do I avoid this?
Either add a step to your playbook that performs a sanity check and doesn't permit you to remove a node from the load balancer if the remaining number of healthy nodes would be less than one,
or
work out how to preferentially remove unhealthy nodes from the load balancer first. Can you split the nodes into groups by their health-check status, then process unhealthy nodes first? Or alternately change the sort order so that it's by health-check status rather than by IP address?
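For the health status itself, one option (a sketch, with a made-up target group ARN) is to ask the ALB via the AWS CLI and feed the result into your inventory or per-host variables:
# List the instance IDs the ALB currently considers healthy
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef --query 'TargetHealthDescriptions[?TargetHealth.State==`healthy`].Target.Id' --output text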
As an alternate methodology: can you not add new nodes before removing old ones?
OK, since you have a healthy flag, use it to include an update role:
- hosts: all
  gather_facts: yes
  tasks:
    - include_role:
        name: update
      when: not healthy

    - include_role:
        name: update
      when: healthy
This way, the unhealthy ones are done first: with Ansible's default linear strategy, the first task completes on all hosts before the second one starts.

How can I configure my EC2 instance running DPDK to filter traffic between an Elastic IP and another EC2 instance?

I have a hardware setup that I need to simulate on AWS. In hardware, I have a customer's computer connected to the internet via a cable modem. In between the cable modem and the customer's computer I insert my computer running DPDK and a packet filter application. All packets from the cable modem enter my computer/DPDK at Int-1, are processed, and leave my computer on Int-2 to go to the customer's system. The same data path is traversed in reverse for packets originating from the customer's system. Packets simply follow the Ethernet cables to where they are supposed to go.
I need to replicate that in the AWS cloud, but I do not have Ethernet cables to force the routing of packets. I need to insert my EC2 instance running DPDK between an Elastic IP and the customer's EC2 instance with a private IP. The setup looks like this:
               VPC
               +------------------------------------------------+
               |                                                |
               | c5.2xlarge EC2             t2.micro EC2        |
               | +--------------------+     +----------------+  |
               | | My ec2 with DPDK   |     | Customer ec2   |  |
EIP 1.2.3.4 <---> Int-1 10.0.1.101    |     |                |  |
               | |        ^           |     |                |  |
               | |        |           |     |                |  |
               | |        v           |     |                |  |
               | |   <processing>     |     |                |  |
               | |        ^           |     |                |  |
               | |        |           |     |                |  |
               | |        v           |     |                |  |
               | | 10.0.2.101 Int-2 <---> 10.0.1.89 eth0     |  |
               | |                    |     |                |  |
               | +--------------------+     +----------------+  |
               +------------------------------------------------+
This is running on CentOS 7.
When DPDK is running, ens6 becomes Int-1 and ens7 becomes Int-2.
The EIP 1.2.3.4 used to be attached to the customer's private IP 10.0.1.89, so internet users had access to the customer's ec2, and the customer's ec2 users had access to the internet.
After my ec2 instance is added to the VPC and the EIP is detached from the customer's ec2 and reattached to my ec2, I want to filter traffic in both directions, to and from the customer's ec2.
If my ec2 were not running DPDK, I could simply use iptables to NAT traffic in both directions. But with DPDK I need a user-space NAT that runs on my ec2, or I need some other way to route packets from the EIP to my Int-1 interface, then out the Int-2 interface to the customer ec2, and back.
There are many purported DPDK TCP/IP stacks out there, but none really seem to work, for one reason or another. I would love to make this work with AWS routing alone and no NAT, but I don't know if that is possible.
Help!
To implement a basic NAT you don't need a TCP/IP stack. Just parse each frame down to the IP header and substitute any occurrence of IP 1.2.3.4 with 10.0.1.101, and vice versa. Then set the mbuf ol_flags to recalculate the checksums in the NIC, or do it in software, and you're done.
Please see the Mbuf library and rte_ipv4_udptcp_cksum() for more details regarding the checksums.
Another issue is that your DPDK filtering application works as an L3 device (i.e. a router), while it might be much simpler if it worked as a transparent L2 device (i.e. a transparent bridge). That would eliminate the need for an extra route on the gateway.

What are the differences between Network and HTTP(S) load balancers in GCP

GCP provides two load balancers, namely Network and HTTP(S), where the former works at layer 4 and the latter works at layer 7.
There is also documentation stating that even HTTP traffic can be load balanced by a network load balancer, which makes it slightly confusing to choose a load balancer for a web app in GCP. It is better to understand the differences before selecting one for the project.
What are the differences between them in terms of workflow, setup, region/zone scope, options for session affinity, and other settings?
Network Load Balancer vs HTTP(S) Load Balancer
+---------------------+------------------------------------------+------------------------------------------------------+
| Category            | Network Load Balancing (NLB)             | HTTP(S) Load Balancing (HLB)                         |
+---------------------+------------------------------------------+------------------------------------------------------+
| 1. Region /         | NLB supports only within a region.       | HLB supports both within-region and                  |
|    Cross-Region     | Does not support cross-region            | cross-region load balancing.                         |
|                     | load balancing.                          |                                                      |
+---------------------+------------------------------------------+------------------------------------------------------+
| 2. Load balancing   | NLB is based on IP address, port         | HLB is based only on the HTTP and                    |
|    based on         | and protocol type. Any TCP/UDP           | HTTPS protocols.                                     |
|                     | traffic, even SMTP, can be               |                                                      |
|                     | load balanced.                           |                                                      |
+---------------------+------------------------------------------+------------------------------------------------------+
| 3. Packet           | Packet inspection is possible; NLB       | HLB cannot inspect packets.                          |
|    inspection       | can balance based on packet contents.    |                                                      |
+---------------------+------------------------------------------+------------------------------------------------------+
| 4. Instance         | No need to create an instance group.     | A managed or unmanaged instance group                |
|    Group            | Target pools are created instead, and    | is necessary for creating an HTTP(S)                 |
|                     | instances can simply be added to the     | load balancer.                                       |
|                     | pool. Ideal for unmanaged setups where   |                                                      |
|                     | instances are non-homogeneous.           |                                                      |
+---------------------+------------------------------------------+------------------------------------------------------+
| 5. Workflow         | A forwarding rule is the starting        | This is quite complex in the HTTP(S) load            |
|                     | point. It directs the request to the     | balancer. A global forwarding rule directs           |
|                     | target pool, from which compute          | the request to a target HTTP proxy, which            |
|                     | instances pick up the request.           | checks the URL map to determine the                  |
|                     |                                          | appropriate backend service, which in turn           |
|                     | Forwarding rule -> target pool           | directs the request to the instance group.           |
|                     | -> instances                             |                                                      |
|                     |                                          | Global forwarding rule -> target HTTP proxy ->       |
|                     |                                          | URL map -> backend services -> instance group        |
+---------------------+------------------------------------------+------------------------------------------------------+
| 6. Types of         | A basic network load balancer that       | 1. Cross-region load balancer uses only one          |
|    load balancer    | directs the request based on IP          | global IP address and routes the request             |
|                     | address, port and protocol within        | to the nearest region.                               |
|                     | the region.                              | 2. Content-based load balancer is based on           |
|                     |                                          | the URL path. Different path rules need              |
|                     |                                          | different backend services, e.g. /video and          |
|                     |                                          | /static require two separate backend services.       |
+---------------------+------------------------------------------+------------------------------------------------------+
| 7. Session affinity | Session affinity can be set, but only    | 1. Client IP affinity: directs the same client       |
|                     | during the creation of the target        | IP to the same backend instance by                   |
|                     | pool. Once set, the value cannot         | computing a hash of the IP.                          |
|                     | be changed.                              | 2. Generated cookie affinity: the load balancer      |
|                     |                                          | stores a cookie on clients and directs the same      |
|                     |                                          | client to the same instance using that cookie.       |
+---------------------+------------------------------------------+------------------------------------------------------+
| 8. Health check     | Health checks are optional, but NLB      | Health can be verified using either an               |
|                     | relies on HTTP health checks to          | HTTP or an HTTPS health check.                       |
|                     | determine instance health.               |                                                      |
+---------------------+------------------------------------------+------------------------------------------------------+
The above table is based on my perspective. If anything is incorrect or I have missed something, please feel free to comment and I will add it to the table.
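As a concrete illustration of the NLB workflow in row 5, here is a sketch using gcloud with hypothetical names and region:
# Forwarding rule -> target pool -> instances
gcloud compute target-pools create web-pool --region us-central1
gcloud compute target-pools add-instances web-pool --instances web-1,web-2 --instances-zone us-central1-a
gcloud compute forwarding-rules create web-rule --region us-central1 --ports 80 --target-pool web-pool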
Here is the link for instructions on setting up an HTTP load balancer in GCP.
In general, below are the differences between network and HTTP load balancers.
Network Load balancer (layer 4):
This is the distribution of traffic based on network variables such as IP address and destination port. It operates at layer 4 (TCP) and below and is not designed to take into consideration anything at the application layer, such as content type, cookie data, custom headers, user location, or application behavior. It is context-less, caring only about the network-layer information contained within the packets it is directing this way and that.
Application load balancer (Layer 7)
This is the distribution of requests based on multiple variables, from the network layer to the application layer. It is context-aware and can direct requests based on any single variable as easily as on a combination of variables. Applications are load balanced based on their particular behavior and not solely on server (operating system or virtualization layer) information. It provides the ability to route HTTP and HTTPS traffic based on rules, host-based or path-based. Like an NLB, each target can be on a different port.
The other difference between the two is important because network load balancing cannot assure availability of the application. It bases its decisions solely on network- and TCP-layer variables and has no awareness of the application at all. Generally, a network load balancer will determine “availability” based on the ability of a server to respond to an ICMP ping or to correctly complete the three-way TCP handshake. An application load balancer goes much deeper, and is capable of determining availability based not only on a successful HTTP GET of a particular page but also on verifying that the content is as expected based on the input parameters.
Ref : https://medium.com/awesome-cloud/aws-difference-between-application-load-balancer-and-network-load-balancer-cb8b6cd296a4
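To make that health-check distinction concrete in GCP terms, here is a sketch of an application-level health check (the port and path are hypothetical), which goes beyond a bare TCP handshake:
# Marks a backend healthy only if GET /healthz on port 80 returns HTTP 200
gcloud compute health-checks create http app-health-check --port 80 --request-path /healthz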
In addition, I would like to mention that there are 3 main aspects to consider in choosing the correct load balancer (LB) in GCP:
1) Global versus regional
2) External versus internal
3) Traffic type
Please find more information in this chart as well.