Keeping AWS services internal to the organization instead of public - amazon-web-services

I have hosted a few services on AWS, but they are all public and can be accessed from anywhere, which is a security threat. Could you please let me know how to restrict the services to internal users of the organization, without any authentication medium?

I found a workaround for this: if you have a list of IP ranges (a network administrator may be able to help you with this), take that list and add the ranges to the load balancer's security group.

You should spend some time reviewing security recommendations on AWS. Some excellent sources are:
Whitepaper: AWS Security Best Practices
AWS re:Invent 2017: Best Practices for Managing Security Operations on AWS (SID206) - YouTube
AWS re:Invent 2017: Security Anti-Patterns: Mistakes to Avoid (FSV301) - YouTube
AWS operates under a Shared Responsibility Model, which means that AWS provides many security tools and capabilities, but it is your responsibility to use them correctly!
Basic things you should understand are:
Put public-facing resources in a Public Subnet. Everything else should go into a Private Subnet.
Configure Security Groups to only open a minimum number of ports and IP ranges to the Internet.
If you only want to open resources to "internal users of organization without any authentication medium", then you should connect your organization's network to AWS via AWS Direct Connect (private fiber connection) or via an encrypted VPN connection.
Security should be your first consideration in everything you put in the cloud — and, to be honest, everything you put in your own data center, too.
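To make the security-group point concrete, here is a minimal boto3 sketch of the workaround described above; the group ID and CIDR range are placeholders, not values from the question:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow HTTPS to the load balancer's security group only from the
# corporate IP range. With no 0.0.0.0/0 rule, there is no public access.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the ALB's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # placeholder: corporate LAN range
            "Description": "Corporate LAN only",
        }],
    }],
)
```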

Consider a least-privilege approach when planning your VPC network architecture, NACLs, and firewall rules, as well as IAM access and S3 buckets.
Least privilege: configure the minimum permissions and access required in IAM, bucket policies, VPC subnets, network ACLs, and security groups, with a need-to-know, whitelist approach.
Start by having specific VPCs with two main network segments: 1) public and 2) private.
You will place your DMZ components in the public segment; components such as internet-facing web servers, load balancers, gateways, etc. belong there.
For the rest, such as applications, data, or internal-facing load balancers or web servers, make sure you place them in the private subnet, where you will use internal IP addresses from a specified internal range to refer to the components inside the VPC.
If you have multiple VPCs and you want them to talk to each other, you can peer them together.
You can also use Route 53 internal DNS to simplify naming.
Just in case you need internet access from the private segment, you can configure a NAT gateway in the public subnet and route outgoing internet traffic through it.
S3 buckets can be reached through VPC endpoints (routing to S3 buckets/objects via the internal network rather than over the internet).
In IAM you can create policies that whitelist source IPs and attach them to roles and users; this is a great way to combine VPN connections/whitelisted IPs and keep network access in harmony with IAM. That means even console access can be governed by a whitelisting policy, as in the sketch below.
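As a hedged sketch of what such a whitelisting policy could look like (the CIDR range and policy name are made up for illustration), a deny-unless-whitelisted IAM policy might be created like this with boto3:

```python
import json
import boto3

iam = boto3.client("iam")

# Deny all actions when the request does not come from the whitelisted
# ranges; aws:ViaAWSService avoids breaking calls AWS makes on your behalf.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]},  # placeholder
            "Bool": {"aws:ViaAWSService": "false"},
        },
    }],
}

iam.create_policy(
    PolicyName="whitelist-source-ip",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```

Attached to a group or role, a policy like this governs both API and console access made from outside the whitelisted ranges.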

Related

aws network firewall conditional rules

I work for the platform team of my company and we have a central AWS Network Firewall in a central VPC. We provide AWS accounts for different teams and if they need internet access, we connect the VPCs of the teams with a transit gateway to our central VPC and route traffic through our central VPC and firewall into the internet.
We currently only allow traffic to certain domains that we whitelist. The problem is that if we whitelist a URL, every AWS account can reach it. Sometimes an AWS account needs to reach just one endpoint, not every whitelisted endpoint.
My question is, is it possible to use some kind of conditional rules that only apply to certain accounts/VPCs?
We use Cloudformation for IaC. Any help and examples are highly appreciated!
Cheers

AWS Private Link vs VPC Endpoint

What is the difference between PrivateLink and a VPC endpoint? As per the documentation, it seems like a VPC endpoint is a gateway to access AWS services without exposing the data to the internet, but the definition of AWS PrivateLink also looks similar.
Reference Link:
https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html
Is PrivateLink a superset of VPC endpoints?
It would be really helpful if anyone provides the difference between these two with examples!
Thanks in Advance!
AWS defines them as:
VPC endpoint — The entry point in your VPC that enables you to connect privately to a service.
AWS PrivateLink — A technology that provides private connectivity between VPCs and services.
So PrivateLink is a technology allowing you to privately (without the internet) access services in VPCs. These services can be your own, or provided by AWS.
Let's say that you've developed some application and you are hosting it in your VPC. You would like to make this application available to services in other VPCs and to other AWS users/accounts, but you don't want to set up any VPC peering or use the internet for that. This is where PrivateLink can be used: using PrivateLink you can create your own VPC endpoint service, which will enable other services to use your application.
In the above scenario, a VPC interface endpoint is the resource that users of your application would have to create in their VPCs to connect to your application. This is the same as when you create a VPC interface endpoint to access AWS-provided services privately (no internet), such as Lambda, KMS or SMS.
There are also Gateway VPC endpoints, an older technology that predates PrivateLink. Gateway endpoints can only be used to access S3 and DynamoDB, nothing else.
To sum up, PrivateLink is a general technology which can be used by you or by AWS to allow private access to internal services. A VPC interface endpoint is the resource that users of such VPC services create in their own VPCs to interact with them.
Suppose there is a website xyz.com that I am hosting on a group of EC2 instances, exposed to the outside world through a Network Load Balancer.
Now a client, who has their own AWS account, wants to access xyz.com from an EC2 instance running in their account.
One approach is to go over the internet.
However, the client wants to avoid the internet route and instead use the AWS backbone to reach xyz.com.
The technology that enables that is AWS PrivateLink.
(Note that if you search for PrivateLink in the AWS services, there will be none; you will get "Endpoint services" as the closest hit.)
So, this is how to route traffic through the AWS backbone:
1. I, the owner of xyz.com, will create a VPC endpoint service (note the keyword service here). The VPC endpoint service will point to my Network Load Balancer.
2. I will then give my VPC endpoint service name to the client.
3. The client will create a VPC endpoint (note: this is different from #1). While creating it, the client will specify the VPC endpoint service name (from #1) that they got from me.
4. I can choose to be prompted to accept the connection from the client to my VPC endpoint service.
5. As soon as I accept it, the client can reach xyz.com from their EC2 instance.
There is no internet, no Direct Connect or VPN; this simply works, and it's secure.
And which technology enabled it? AWS PrivateLink!
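Here is a boto3 sketch of both sides of that flow; the ARNs and IDs are placeholders, and in practice the consumer call would run under the client's account:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side (owner of xyz.com): create the endpoint service fronted
# by the existing Network Load Balancer; acceptance gives step 4 above.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/xyz/abc123"  # placeholder
    ],
    AcceptanceRequired=True,
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Consumer side (client's account): create an interface endpoint that
# names the provider's endpoint service (step 3 above).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder: client's VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
)
```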
Note that PrivateLink is the only technology that allows two VPCs with overlapping CIDR ranges to connect.
A useful way to understand the differences is to look at how they technically connect private resources to public services.
Gateway endpoints route traffic by adding prefix lists to VPC route tables that target the gateway endpoint. A gateway endpoint is a logical gateway object, similar to an Internet Gateway.
In contrast, an interface endpoint uses PrivateLink to inject an Elastic Network Interface (ENI) into the VPC at the subnet level, giving network-interface functionality, and therefore DNS and private IP addressing, as the means to connect to AWS public services, rather than simply being routed to them.
The differences in connections offer differing advantages and disadvantages (availability, resiliency, access, scalability, etc.), which then dictate how best to connect private resources to public services.
PrivateLink is simply a heavily abstracted technology that allows a more simplified connection by using DNS. The following AWS re:Invent talk offers a great overview of PrivateLink: https://www.youtube.com/watch?v=abOFqytVqBU
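To make the routing difference concrete, here is a hedged boto3 sketch: the gateway endpoint is wired into route tables, while the interface endpoint lands as an ENI in a subnet. All IDs are placeholders; the service names are the standard us-east-1 ones:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: traffic to S3 is routed via prefix-list entries
# added to the listed route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)

# Interface endpoint (PrivateLink): an ENI with a private IP and DNS
# name is injected directly into the subnet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder
    PrivateDnsEnabled=True,
)
```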
As you correctly mentioned in the question, both VPC endpoints and AWS PrivateLink avoid exposure to the internet. In the AWS console, under VPC, there is a clear option to create an endpoint, but there is no option/label to create an AWS PrivateLink. There is, however, one more option/label called endpoint service. Creating an endpoint service is one way to establish an AWS PrivateLink. On one side of this PrivateLink is your endpoint service, and on the other side is the endpoint itself. Interestingly, we create these two sides in two different VPCs. In other words, you are connecting two VPCs with this private link (instead of using the internet or VPC peering).
Think of it like this:
VPC1 with endpoint service ----> private link -----> VPC2 with endpoint
Here the endpoint service side is the service provider, while the endpoint side is the service consumer. So when you have some service (maybe an application or piece of software) that you think endpoints in other VPCs could consume, you create an endpoint service at your end, and consumers create endpoints at their end. When consumers create endpoints at their end, they have to give/select your service name, and thus the private link to your service is established.
Ultimately you can have multiple consumers of your service, in a one-to-many relationship.

Alternative to AWS's Security groups in GCP?

Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation which I have:
A Basic Node.js server running in Cloud Run as a docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want to do is make a 'security group' of sorts, so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
What we do in AWS is ensure that only services that are part of the same security group can access each other.
I'm not very sure but I guess in GCP I need to make use of Firewall rules (not sure at all).
If I'm correct could someone please guide me as to how to go about this? And if I'm wrong could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups (see the GCP VPC firewall documentation for details). You can place your PostgreSQL database, Redis instance, and Node.js server inside the GCP VPC.
Make the Node.js server available to the public via DNS.
Set the default-allow-internal rule, so that only the services present in the VPC can access each other (blocking public access to the DB and Redis); a sketch of such a rule follows below.
As an alternative approach, you may also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the above solution is recommended.
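As a rough sketch (not taken from the answer above), such a rule could be created with the google-cloud-compute Python client; the project, network, and tag names here are invented for illustration:

```python
from google.cloud import compute_v1

# Allow PostgreSQL (5432) and Redis (6379) ingress only from instances
# carrying the "node-server" tag; everything else stays blocked.
allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"  # field name as generated by the client library
allowed.ports = ["5432", "6379"]

firewall = compute_v1.Firewall()
firewall.name = "allow-node-to-db"  # hypothetical rule name
firewall.network = "projects/my-project/global/networks/my-vpc"  # placeholder
firewall.direction = "INGRESS"
firewall.source_tags = ["node-server"]  # placeholder tag on the caller
firewall.target_tags = ["db", "redis"]  # placeholder tags on DB/Redis hosts
firewall.allowed = [allowed]

client = compute_v1.FirewallsClient()
operation = client.insert(project="my-project", firewall_resource=firewall)
operation.result()  # block until the rule is created
```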
Security groups inside AWS are instance-attached, firewall-like components. So, for example, you can have an SG at the instance level, similar to configuring iptables on a regular Linux host.
Google firewall rules, on the other hand, operate more at the network level. In terms of "granularity", security groups map to instance-level granularity, so your alternatives there are host firewalls such as one of the following:
firewalld
nftables
iptables
The thing is that in AWS you can also apply filtering at the subnet level, via network ACLs (security groups themselves attach to instances/ENIs, not subnets). NACLs attached to subnets are also kind of similar to Google firewall rules; still, security groups provide a bit more granularity, since you can have different security groups per instance, while in GCP firewall rules are defined per network. At this level, protection should come from firewalls on the subnets.
Thanks @amsh for the solution to the problem. There were a few more things that needed to be done, so I'll list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instance in the same region as that of the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region of the VPC network, else they won't connect via the Private IP.
Avoid changing the firewall rules: The firewall rules must not be changed unless you need them to perform differently than they normally do.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.

How to expose API endpoints from a private AWS ALB

We have several microservices on AWS ECS. We have a single ALB with a different target group for each microservice. We want to expose some endpoints externally, while some endpoints are just for internal communication.
The problem is that if we put our load balancer in a public subnet, we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of all the security concerns, like DDoS protection, etc.
What possible approaches can we take, and does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will be the least admin work and probably the cheapest overall.
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance (a WAFv2 sketch follows the update below):
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Assign ALB to your WAF ACL.
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
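The steps above describe the classic WAF console flow. With the current WAFv2 API, the same idea (block requests to private paths unless they come from allowed IPs) might look roughly like this boto3 sketch; every name, path, CIDR, and ARN below is a placeholder:

```python
import boto3

wafv2 = boto3.client("wafv2")

# IP set holding the ranges allowed to call the private endpoints.
ip_set = wafv2.create_ip_set(
    Name="internal-ips",           # hypothetical name
    Scope="REGIONAL",              # REGIONAL scope is required for ALBs
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],  # placeholder CIDR
)

acl = wafv2.create_web_acl(
    Name="private-endpoint-acl",   # hypothetical name
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-private-paths-from-outside",
        "Priority": 0,
        # Block when the URI is under /internal/ AND the source IP is
        # not in the allowed set.
        "Statement": {"AndStatement": {"Statements": [
            {"ByteMatchStatement": {
                "SearchString": b"/internal/",  # placeholder path prefix
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "STARTS_WITH",
            }},
            {"NotStatement": {"Statement": {
                "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]},
            }}},
        ]}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": False,
                             "CloudWatchMetricsEnabled": False,
                             "MetricName": "privatePaths"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": False,
                      "CloudWatchMetricsEnabled": False,
                      "MetricName": "privateEndpointAcl"},
)

# Associate the web ACL with the (placeholder) external ALB.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/abc123",  # placeholder
)
```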
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will have the internal ALB's IP addresses. As the private IP addresses of an ALB are not static, you will need to set up a CloudWatch cron rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
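For context, the core of what that Lambda function does is registering the ALB's current private IPs into the NLB's IP-based target group; a single registration with boto3 looks roughly like this (the ARN and IPs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Register the internal ALB's current private IPs (discovered elsewhere,
# e.g. by describing its network interfaces) in the NLB's IP target group.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/alb-ips/abc123",  # placeholder
    Targets=[
        {"Id": "10.0.1.15", "Port": 80},  # placeholder ALB ENI addresses
        {"Id": "10.0.2.23", "Port": 80},
    ],
)
```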
Both ALBs and NLBs are scalable and protected from DDoS by AWS; AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support multiple target group registration per service, it is already in their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it:
Two LBs, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are directly exposed through the public LB.
For the private API endpoints that needed to be exposed, we added a proxy to the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while serving only the public ones via API Gateway and proxying back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use-case though might be their doc on ELB Security Groups. These are, as you may expect, security groups that are applied at the ELB level rather than the Instance level.
Using security groups, you can specify who has access to which endpoints.
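A common way to express "who has access to which endpoints" with security groups is to reference one group as the source of another, instead of using CIDR ranges. Here is a minimal boto3 sketch with placeholder group IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the internal ALB's security group to be reached on port 443
# only from members of the application servers' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # placeholder: internal ALB's group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0bbbbbbbbbbbbbbbb",  # placeholder: app servers' group
            "Description": "Internal services only",
        }],
    }],
)
```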

Restrict access to objects in S3

I would like to restrict access to objects stored in an Amazon S3 bucket.
I would like to allow all the users on our LAN (they may or may not have Amazon credentials, since the entire infrastructure is not on AWS). I have seen some discussion around IP address filtering and VPC endpoints. Can someone please help me here? I am not sure if I can use a VPC endpoint, since not all users on our LAN are in an Amazon VPC.
Is this possible?
Thanks
Most likely your corporate LAN uses static IP addresses. You can create S3 bucket policies that allow (or deny) access based on IP address. Here is a good AWS article on this:
Restricting Access to Specific IP Addresses
VPC endpoints are for VPC-to-AWS-service connectivity (basically using Amazon's private network instead of the public internet). VPC endpoints won't help you with corporate connectivity (except if you are using Direct Connect).
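Here is a hedged sketch of the kind of bucket policy that article describes, applied with boto3 (the bucket name and CIDR are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the request originates from
# the corporate static IP range. Be careful: this also locks out
# administrators working from other addresses.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideOfficeRange",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-bucket",    # placeholder bucket
            "arn:aws:s3:::my-bucket/*",
        ],
        "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},  # placeholder
    }],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```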
Here is how I would solve it:
Configure users from a corporate directory with identity federation via SAML.
Create groups.
Apply policies to the groups.
This will give fine-grained control and less maintenance overhead.
This will help you control not only S3, but also any future workloads you migrate to AWS and the permissions to those resources.
IP-based filtering is prone to security risks, is high-maintenance in the long run, and is not scalable.
EDIT:
Adding more documentation to do the above,
Integrating ADFS with AWS IAM:
https://aws.amazon.com/blogs/security/enabling-federation-to-aws-using-windows-active-directory-adfs-and-saml-2-0/
IAM Groups:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_create.html
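For the "create groups, apply policies to groups" steps, the boto3 equivalent is short; the group name is hypothetical, and the attached policy is the AWS-managed read-only S3 policy (note that in a SAML setup, federated users are typically mapped to roles rather than added to groups directly):

```python
import boto3

iam = boto3.client("iam")

# Group for users who should only read from S3.
iam.create_group(GroupName="s3-readers")  # hypothetical name

# Attach a managed policy to the group rather than to individual users.
iam.attach_group_policy(
    GroupName="s3-readers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```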