On-Premises equivalent of VPC - amazon-web-services

Many examples for AWS (and, I'm confident, other clouds) use the concept of a VPC in part to provide some degree of security. The idea is that it can be set up to allow traffic only on certain ports and even only from certain IP addresses. What it gives you, in any case, is a zone of defined traffic.
What did people do for on-premises installations before the cloud? Somewhere, someone is probably still doing something like this on their own computer network.

An Amazon VPC is a virtualized network.
Traditional physical networks consist of routers, switches and firewalls.
Even physical networks use virtualization, providing VLANs such as Production, Testing and Development virtual networks, all across the same physical network.
An Amazon VPC maps very closely to a "real-world" network, except that it is easier to configure and doesn't require any cabling. A VPC maintains the concepts of public and private subnets, route tables and inter-network connections.
One capability of an Amazon VPC that does not exist in physical networks is the Security Group, which is a firewall for each individual resource. Traditional networks use firewall devices to restrict traffic travelling between subnets (similar to VPC NACLs), but Security Groups add a firewall at the resource level, such as on an Amazon EC2 instance or Amazon RDS database. This adds considerably more security capability than is available in a traditional network.
Amazon VPCs can also be deployed via API calls or AWS CloudFormation templates, allowing a whole network to be deployed from a script. This is similar to what can be accomplished with VMware virtual networks.
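As a hedged illustration of the scripted-deployment point, here is a minimal CloudFormation template sketch; the logical resource names and CIDR ranges are placeholders, not anything from the question:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative VPC with one public subnet
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16      # placeholder range
      EnableDnsSupport: true
      EnableDnsHostnames: true
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref AppVpc
      InternetGatewayId: !Ref InternetGateway
```

Deploying this one template recreates the whole network from a script, which is exactly the property the answer is describing.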

Related

Bastion Server for all AWS instances

I have more than 30 production Windows servers across all AWS regions. I would like to connect to all of them from one bastion host. Can anyone tell me which is a good choice? How can I set up one bastion host to communicate with all servers across different regions and different VPCs? Any advice is appreciated.
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with the highest security principles built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs across regions, one way to connect them all and reach them from your bastion's VPC is to set up an inter-Region Transit Gateway. Among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise it will be impossible to arrange the route tables properly.
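Before wiring up the Transit Gateway, you can sanity-check the CIDR ranges for overlaps with Python's standard `ipaddress` module. This is just a sketch; the example ranges are hypothetical:

```python
import ipaddress

def find_overlaps(cidrs):
    """Return pairs of CIDR blocks that overlap each other."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    overlaps = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                overlaps.append((str(nets[i]), str(nets[j])))
    return overlaps

# Hypothetical VPC CIDRs across regions; the third sits inside the first.
vpc_cidrs = ["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/17"]
print(find_overlaps(vpc_cidrs))  # [('10.0.0.0/16', '10.0.128.0/17')]
```

Any pair reported here would make it impossible to route unambiguously between the peered VPCs.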
Secondly, you need to allow access from your bastion in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all instances in private subnets. From private subnets you can still reach the internet via NAT gateways (or NAT instances) while staying protected from unauthorized external access attempts. Therefore, if your bastion is in a private subnet, you can use SSM's port-forwarding capability to establish a session from your local machine. This way you get connectivity even while your bastion stays secured in a private subnet.
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges to your AWS account. So it would be wise to consider what practical problem you are trying to solve (not shared in the question), and then evaluate whether there is an applicable managed service like SSM already provided by AWS. Finally, from a security perspective, granting access to all instances from a single bastion might not be best practice: if your bastion is compromised for whatever reason, you have effectively compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of your potential solution.

CIDR overlap between AWS VPC and on-prem network

I need to connect an AWS VPC with an on-prem network. Due to a limited CIDR range, I need to choose between Transit Gateway and my preferred account/VPC architecture. Looking for advice.
I'm designing an AWS environment to run some apps for a Big Company.
Big Company has a big on-prem network -- so big (or so badly partitioned) that they can only spare a /25 CIDR range (128 addresses).
My apps need to send/receive data to/from systems on the on-prem network.
They typically use Transit Gateway to connect between on-prem and AWS VPCs.
For management/maintainability/sanity reasons, I would like to separate the dev, staging and prod versions of my app into separate accounts (or at least VPCs). There are a lot of different AWS components involved in each environment.
Questions:
If the CIDRs provided are too small for my environments, is it possible to use some kind of NAT setup?
I understand this is not possible with Transit Gateway. What might we use instead?
Is it worth complicating the network setup to achieve dev/stage/prod separation?
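To see how tight a /25 gets when carved into per-environment subnets, here is a quick sketch with Python's `ipaddress` module; the parent range and the three-environment split are assumptions for illustration:

```python
import ipaddress

# A /25 gives 128 addresses total; splitting it for dev/staging/prod
# leaves very small subnets.
parent = ipaddress.ip_network("10.50.0.0/25")   # hypothetical allocation
subnets = list(parent.subnets(new_prefix=27))   # four /27 blocks

print(parent.num_addresses)  # 128
for s in subnets:
    # AWS reserves 5 addresses per subnet, so a /27 leaves 27 usable.
    print(s, s.num_addresses - 5)
```

With only ~27 usable addresses per /27, a NAT layer (so each environment can reuse a larger private range internally and present only a few addresses to on-prem) starts to look attractive.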

Designing highly available VPCs

I have been reading and watching everything [1] I can related to designing highly available VPCs. I have a couple of questions. For a typical 3-tier application (web, app, db) that needs HA within a single region it looks like you need to do the following:
Create one public subnet in each AZ.
Create one web, app, and db private subnet in each AZ.
Ensure your web, app and db EC2 instances are split evenly between AZs (for this post assume the DBs are running hot/hot and the apps are stateless).
Use an ALB with Auto Scaling to distribute load across the web tier. From what I've read, ALBs provide HA across AZs within the same region.
Utilize Internet gateways to provide a target route for Internet traffic.
Use NAT gateways to source-NAT the private-subnet instances so they can get out to the Internet.
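The subnet layout above (one public subnet plus web/app/db private subnets in each AZ) can be sketched as a simple address plan with Python's `ipaddress` module; the VPC CIDR and AZ names are assumptions:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # hypothetical VPC CIDR
azs = ["us-east-1a", "us-east-1b"]          # two AZs for HA
tiers = ["public", "web", "app", "db"]

# Carve one /24 per (AZ, tier) pair out of the VPC range.
blocks = vpc.subnets(new_prefix=24)
plan = {(az, tier): str(next(blocks)) for az in azs for tier in tiers}

for (az, tier), cidr in sorted(plan.items()):
    print(f"{az:12s} {tier:7s} {cidr}")
```

Eight non-overlapping subnets fall out of the plan, one per tier per AZ, which is the shape the HA design above calls for.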
With this approach, do you need to deploy one Internet gateway and one NAT gateway per AZ? If you only deploy one, what happens during an AZ outage? Are these services AZ-aware? (I can't find a good answer to this question.) Any and all feedback (glad to RTFM) is welcome!
Thank you,
- Mick
[1] Last two resources I reviewed
Deploying production grade VPCs
High Availability Application Architectures in Amazon VPC
You need a NAT gateway in each AZ, as its redundancy is limited to a single AZ. Here is the snippet from the official documentation:
Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.
You need just a single Internet gateway per VPC, as it is redundant across AZs and is a VPC-level resource. Here is the snippet from the official Internet gateway documentation:
An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.
Here is a highly available architecture diagram showing a NAT gateway per AZ and the Internet gateway as a VPC-level resource.
Image source: https://aws.amazon.com/quickstart/architecture/vpc/

Why can't we implement multiple network interfaces in a single VPC network on GCP, whereas it is possible in AWS and Azure?

Why can't we implement multiple network interfaces on a single VPC (which has multiple subnets) in GCP, whereas it is possible in AWS and Azure?
I came across a problem where I had to implement multiple network interfaces in GCP. In my case, all the subnets were in a single VPC network. I read the GCP documentation and learned that in GCP it is not possible to have multiple network interfaces in a single VPC network; to implement multiple network interfaces, each subnet must be in a different VPC network, whereas it is completely the opposite in AWS and Azure.
In AWS, all network interfaces must be in the same VPC; you cannot attach a network interface from another VPC.
In Azure, all network interfaces must be in the same vNet; you cannot attach a network interface from another vNet.
Of course, a VPC in Google Cloud is a little different from AWS. For example, Azure vNets and AWS VPCs are regional in nature, whereas in GCP a VPC is global. And there are several other differences as well.
I was just curious about this limitation I ran into in GCP.
Your assumption is wrong. You cannot attach more than one network interface to the same subnet, but you can attach them to different subnets in the same VPC.

Keeping AWS services internal to a domain instead of public

I have hosted a few services on AWS; however, all are public and can be accessed from anywhere, which is a security threat. Could you please let me know how to keep the services restricted to internal users of the organization without any authentication medium?
I found a workaround for this: if you have a list of IP ranges (a network administrator may be able to help you with that), put them in the security group attached to the load balancers.
You should spend some time reviewing security recommendations on AWS. Some excellent sources are:
Whitepaper: AWS Security Best Practices
AWS re:Invent 2017: Best Practices for Managing Security Operations on AWS (SID206) - YouTube
AWS re:Invent 2017: Security Anti-Patterns: Mistakes to Avoid (FSV301) - YouTube
AWS operates under a Shared Responsibility Model, which means that AWS provides many security tools and capabilities, but it is your responsibility to use them correctly!
Basic things you should understand are:
Put public-facing resources in a Public Subnet. Everything else should go into a Private Subnet.
Configure Security Groups to only open a minimum number of ports and IP ranges to the Internet.
If you only want to open resources to "internal users of the organization without any authentication medium", then you should connect your organization's network to AWS via AWS Direct Connect (a private fiber connection) or via an encrypted VPN connection.
Security should be your first consideration in everything you put in the cloud — and, to be honest, everything you put in your own data center, too.
Consider a LEAST PRIVILEGE approach when planning your VPC network architecture, NACLs and firewall rules, as well as IAM access and S3 buckets.
LEAST PRIVILEGE: configure the minimum permissions and access required in IAM, bucket policies, VPC subnets, network ACLs and security groups, using a need-to-know, whitelist approach.
Start by giving each VPC two main network segments: 1) public and 2) private.
Place your DMZ components on the public segment: components such as Internet-facing web servers, load balancers, gateways, etc. belong here.
For the rest, such as applications, data, or internal-facing load balancers or web servers, make sure you place them in the private subnet, where you will use an internal IP address from the specified internal range to refer to components inside the VPC.
If you have multiple VPCs and you want them to talk to each other, you can peer them together.
You can also use Route 53 internal DNS to simplify naming.
Just in case you need Internet access from the private segment, you can configure a NAT gateway on the public subnet and route outgoing Internet traffic through it.
S3 buckets can be reached via VPC endpoints (routing to S3 buckets/objects over the internal network rather than over the Internet).
In IAM you can create policies that whitelist source IPs and attach them to roles and users, which is a great way to combine VPN connections/whitelisted IPs with IAM and keep network access in harmony with IAM. That means even console access can be governed by a whitelist policy.
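A hedged sketch of such a source-IP condition in an IAM policy; the CIDR range is a placeholder for your VPN or office egress range, not anything from the question:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideCorporateRange",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
```

Attached to a role or user, this denies every request that does not originate from the whitelisted range, so even console access is governed by the policy as described above. Note that `aws:SourceIp` conditions need care with services that make calls on your behalf.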