Is it possible to secure a load balanced endpoint in Azure? - web-services

Is there any way I can have a load balanced endpoint that does not get exposed publicly in Azure?
My scenario is that I have an endpoint running on multiple VMs. I can create a load balanced endpoint, but this creates a publicly available endpoint.
I only want my load balanced endpoint to be available to my web applications running in Azure (web/worker roles and Azure Websites).
Is there any way to do this?

As @Brent pointed out, you can set up ACLs on Virtual Machine endpoints. One thing you mentioned in your question was the ability to restrict inbound traffic to only your web/worker role instances and Web Sites traffic.
You can certainly restrict traffic to web/worker instances, as each cloud service gets an IP address, so you just need to allow that particular IP address. Likewise, you can use ACLs to restrict traffic to other Virtual Machine deployments (especially in the case where you're not using a Virtual Network). Web Sites, on the other hand, don't offer a dedicated outbound IP address, so you won't be able to use ACLs to manage Web Sites traffic to your Virtual Machines.

Yes, Windows Azure IaaS supports ACLs on endpoints. Using this feature, you can restrict who connects to your load balanced endpoints. For more information see: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/

Related

Allow traffic from certain machines - Google Cloud Armor

I have a Google Cloud Run service and I need to allow traffic from certain machines only.
I use Google Cloud Armor to allow IPs to access the Cloud Run service.
The problem is adding the dynamic IPs of those machines, as they keep changing. I also looked into allowing MAC addresses, but Cloud Armor does not have that feature.
You cannot use MAC addresses for the Internet. The service (Cloud Armor) will never see the client's MAC address, only the MAC address of the last router (which would be a Google router). Google Cloud VPCs do not expose layer 2 information.
Cloud Run is a public service with a public URL. Restricting traffic based upon IP address is not supported by Cloud Run. You can put an HTTP Load Balancer and Cloud Armor in front, but that would not prevent traffic that goes directly to the service.
There are much better techniques to control access to public services. Google Cloud implements authorization using OAuth via Identity Aware Proxy (IAP). That is the correct method to use. Given that your clients have changing IP addresses, that is your best solution.
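For illustration, here is a minimal sketch of the token-based approach, assuming the Cloud Run service is deployed with authentication required and the caller runs as a service account with the invoker role (the service URL below is a placeholder):

import google.auth.transport.requests
import requests
from google.oauth2 import id_token

audience = "https://my-service-abc123-uc.a.run.app"  # placeholder Cloud Run URL

# Fetch an OIDC identity token for the service's URL...
auth_request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(auth_request, audience)

# ...and present it as a bearer token; Cloud Run verifies it server-side.
response = requests.get(audience, headers={"Authorization": "Bearer " + token})
print(response.status_code)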
If I needed access control based upon IP address, I would run my service on Compute Engine using either Container-Optimized OS, Docker, or just natively using Apache/Nginx. You can write custom code to dynamically update VPC firewall rules as the client's IP address changes, as sketched below.
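A hedged sketch of that custom code using the google-cloud-compute client; the project, rule name, and client IP are placeholders:

from google.cloud import compute_v1

def update_allowed_ip(project: str, rule_name: str, client_ip: str) -> None:
    client = compute_v1.FirewallsClient()
    firewall = client.get(project=project, firewall=rule_name)
    # Replace the rule's allowed source range with the client's current address.
    firewall.source_ranges = [client_ip + "/32"]
    client.patch(project=project, firewall=rule_name,
                 firewall_resource=firewall).result()

update_allowed_ip("my-project", "allow-known-client", "203.0.113.7")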

Can we restrict websites access on AWS Workspaces?

I am trying to set up AWS Workspaces and all works fine. I also have a requirement to restrict certain websites like Google Drive, Dropbox, etc. on my AWS instance. How can I add these web access restrictions? Is it possible to configure and apply an AWS firewall through which these restrictions are applied?
Any help/suggestions will be highly appreciated.
There are multiple ways to achieve that.
You can use endpoint protection software that allows web filtering, e.g. Sophos, Trend Micro, etc.
You can use a firewall appliance that lets you control web traffic:
https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=firewall
You can use the new AWS Network Firewall to control web traffic.
From: AWS Network Firewall Features
AWS Network Firewall supports inbound and outbound web filtering for unencrypted web traffic. For encrypted web traffic, Server Name Indication (SNI) is used for blocking access to specific sites. SNI is an extension to Transport Layer Security (TLS) that remains unencrypted in the traffic flow and indicates the destination hostname a client is attempting to access over HTTPS. In addition, AWS Network Firewall can filter fully qualified domain names (FQDN).
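As a rough boto3 sketch of that SNI/Host-based filtering, here is a stateful rule group built from a domain deny list; the rule group name and capacity are illustrative, and the group still has to be attached to a firewall policy:

import boto3

nf = boto3.client("network-firewall")
nf.create_rule_group(
    RuleGroupName="block-file-sharing",  # illustrative name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                # A leading dot matches the domain and its subdomains.
                "Targets": [".drive.google.com", ".dropbox.com"],
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)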

AWS ECS Private and Public Services

I have a scenario where I have to deploy multiple micro-services on AWS ECS. I want to make services able to communicate with each other via APIs developed in each micro-service. I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS. How can I achieve this? Can I use AWS ECS service discovery by having all services in a private subnet to enable communication between each of them? Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
The combination of an AWS load balancer (for public access) and Amazon ECS Service Discovery (for internal communication) is a good fit for this kind of web application.
Built-in service discovery in ECS is another feature that makes it easy to develop a dynamic container environment without needing to manage as many resources outside of your application. ECS and Route 53 combine to provide highly available, fully managed, and secure service discovery.
Service discovery is a technique for getting traffic from one container to another using the container's direct IP address, instead of an intermediary like a load balancer. It is suitable for a variety of use cases:
Private, internal service discovery
Low-latency communication between services
Long-lived bidirectional connections, such as gRPC.
Yes, you can use AWS ECS service discovery with all services in a private subnet to enable communication between them.
This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to look up where they need to make connections based on the state of each service.
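As a hedged boto3 sketch of that registration, assuming an existing private DNS namespace (the namespace ID, cluster, subnet, and task definition names are placeholders):

import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

# Create the discovery service; Route 53 A records will track task IPs.
registry_arn = sd.create_service(
    Name="worker",
    NamespaceId="ns-EXAMPLE",  # placeholder private DNS namespace
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 10}],
               "RoutingPolicy": "MULTIVALUE"},
)["Service"]["Arn"]

# Point the ECS service at the registry; ECS keeps the records current.
ecs.create_service(
    cluster="my-cluster",
    serviceName="worker",
    taskDefinition="flask-worker",
    desiredCount=2,
    launchType="FARGATE",
    serviceRegistries=[{"registryArn": registry_arn}],
    networkConfiguration={"awsvpcConfiguration": {"subnets": ["subnet-EXAMPLE"]}},
)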
Yes, you can use a load balancer to make the front-end micro-service accessible to end-users over the internet. You can look into this diagram that shows an AWS LB and service discovery for a web application in ECS.
Here the backend container in the private subnet serves public requests through the ALB, while the rest of the containers communicate using AWS service discovery.
Amazon ECS Service Discovery
Let’s launch an application with service discovery! First, I’ll create two task definitions: “flask-backend” and “flask-worker”. Both are simple AWS Fargate tasks with a single container serving HTTP requests. I’ll have flask-backend ask worker.corp to do some work and I’ll return the response as well as the address Route 53 returned for worker. Something like the code below:
import os, socket
import requests
from flask import Flask

app = Flask(__name__)
namespace = os.getenv("namespace")  # e.g. ".corp"
worker_host = "worker" + namespace

@app.route("/")
def backend():
    r = requests.get("http://" + worker_host)   # call the worker via its discovery name
    worker = socket.gethostbyname(worker_host)  # resolve the address Route 53 returned
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)
Note that in this private architecture there is no public subnet, just a private subnet. Containers inside the subnet can communicate with each other using their internal IP addresses, but they need some way to discover each other's IP address.
AWS service discovery offers two approaches:
DNS based: Route 53 creates and maintains a custom DNS name which resolves to one or more IP addresses of other containers, for example http://nginx.service.production. Other containers can then send traffic to the destination by just opening a connection using this DNS name.
API based: containers can query an API to get the list of IP address targets available, and then open a connection directly to one of the other containers.
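A small sketch of the API-based approach using boto3 (the namespace and service names are the hypothetical ones from above):

import boto3

sd = boto3.client("servicediscovery")
resp = sd.discover_instances(
    NamespaceName="corp",
    ServiceName="worker",
    HealthStatus="HEALTHY",
)
# Each instance carries its registered attributes, including its IP.
for instance in resp["Instances"]:
    print(instance["Attributes"].get("AWS_INSTANCE_IPV4"))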
You can read more about AWS service discovery and its use cases in amazon-ecs-service-discovery.
According to the documentation, "Amazon ECS does not support registering services into public DNS namespaces"
In other words, when it registers the DNS, it only uses the service's private IP address which would likely be problematic. The DNS for the "public" services would register to the private IP addresses which would only work, for example, if you were on a VPN to the private network, regardless of what your subnet rules were.
I think a better solution is to attach the services to one of two load balancers... one internet facing, and one internal. I think this works more naturally for scaling the services up anyway. Service discovery is cool, but really more for services talking to each other, not for external clients.
I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS.
I would use Service Discovery to wire the services internally and the Elastic Load Balancer integration to make them accessible for the public.
The load balancer can do the load balancing on one side and the DNS SRV records can do the load balancing for your APIs internally.
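As a small illustration of what an internal client does with SRV records, here is a hedged lookup sketch using dnspython; the name worker.corp is hypothetical:

import random
import dns.resolver

# Resolve the SRV record set and pick one host:port target at random.
answers = dns.resolver.resolve("worker.corp", "SRV")
choice = random.choice(list(answers))
host, port = str(choice.target).rstrip("."), choice.port
print("connecting to {}:{}".format(host, port))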
There is a similar question here on Stack Overflow and the answer [1] to it outlines a possible solution using the load balancer and the service discovery integrations in ECS.
Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
Yes, the load balancer can register targets in a private subnet.
References
[1] https://stackoverflow.com/a/57137451/10473469

How to expose API endpoints from a private AWS ALB

We have several microservices on AWS ECS, behind a single ALB with a different target group for each microservice. We want to expose some endpoints externally while keeping others for internal communication only.
The problem is that if we put our load balancer in a public subnet, we expose all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of security concerns like DDoS protection, etc.
What possible approaches we can have or does AWS provide some sort of out of the box solution for this ?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will be the least admin effort and probably the cheapest overall.
Check out WAF. It stands for web application firewall and is available as an AWS service. Follow these steps as guidance (a sketch in code follows the list):
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Associate your ALB with the WAF ACL.
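A hedged boto3 sketch of that rule using the current wafv2 API (the steps above use WAF Classic terms, which map onto slightly different wafv2 concepts; names, CIDRs, and the /internal path are placeholders):

import boto3

waf = boto3.client("wafv2")

ip_set = waf.create_ip_set(
    Name="allowed-internal-clients",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],  # placeholder allow list
)["Summary"]

waf.create_web_acl(
    Name="private-endpoints-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-private-unless-allowed",
        "Priority": 0,
        "Action": {"Block": {}},
        "Statement": {"AndStatement": {"Statements": [
            # The request targets a private endpoint...
            {"ByteMatchStatement": {
                "SearchString": b"/internal",
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "STARTS_WITH",
            }},
            # ...and does not come from the allowed IP set.
            {"NotStatement": {"Statement": {
                "IPSetReferenceStatement": {"ARN": ip_set["ARN"]},
            }}},
        ]}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "privateEndpoints"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "privateEndpointsAcl"},
)
# Finally, associate the web ACL with the ALB via waf.associate_web_acl().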
UPDATE:
In this case you have to use an internet-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB) with one IP-based target group.
The IP-based target group will contain the internal ALB's IP addresses. Since the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well; a simplified sketch follows the link):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
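The linked function aside, the core of such a refresher is small. A hedged sketch of the idea (the ALB DNS name and target group ARN are placeholders, and the linked Lambda handles edge cases this skips):

import socket
import boto3

ALB_DNS = "internal-my-alb-123456.eu-west-1.elb.amazonaws.com"  # placeholder
TG_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/alb-ips/EXAMPLE"

def refresh(event=None, context=None):
    elbv2 = boto3.client("elbv2")
    # IPs the internal ALB currently resolves to.
    current = set(socket.gethostbyname_ex(ALB_DNS)[2])
    # IPs currently registered in the NLB's IP target group.
    registered = {t["Target"]["Id"] for t in elbv2.describe_target_health(
        TargetGroupArn=TG_ARN)["TargetHealthDescriptions"]}
    if current - registered:
        elbv2.register_targets(TargetGroupArn=TG_ARN,
                               Targets=[{"Id": ip} for ip in current - registered])
    if registered - current:
        elbv2.deregister_targets(TargetGroupArn=TG_ARN,
                                 Targets=[{"Id": ip} for ip in registered - current])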
Both ALB and NLB are scalable and protected from DDoS by AWS; AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support multiple target group registration per service, it is already in their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it.
Two LBs, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that needed to be exposed, we added a proxy to the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway and proxying back to the API.
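A hedged sketch of that setup with boto3 and an HTTP API: a VPC link reaches the private ALB's listener, and only the public routes are exposed (ARNs, subnets, and the route key are placeholders):

import boto3

apigw = boto3.client("apigatewayv2")

link = apigw.create_vpc_link(Name="to-private-alb",
                             SubnetIds=["subnet-EXAMPLE"],
                             SecurityGroupIds=["sg-EXAMPLE"])
api = apigw.create_api(Name="public-api", ProtocolType="HTTP")

# Proxy integration through the VPC link to the private ALB's listener.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/EXAMPLE",
    ConnectionType="VPC_LINK",
    ConnectionId=link["VpcLinkId"],
    PayloadFormatVersion="1.0",
)
# Expose only the public portion of the API surface.
apigw.create_route(ApiId=api["ApiId"],
                   RouteKey="ANY /public/{proxy+}",
                   Target="integrations/" + integration["IntegrationId"])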
I don't see it mentioned yet so I'll note that we use a CloudMap for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use-case though might be their doc on ELB Security Groups. These are, as you may expect, security groups that are applied at the ELB level rather than the Instance level.
Using security groups, you can specify who has access to which endpoints.
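A brief boto3 sketch of that pattern, where the instances' security group only accepts traffic originating from the ELB's security group (both group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-INSTANCES",  # placeholder: group attached to the instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Referencing the ELB's group instead of a CIDR means only the
        # load balancer can reach the instances on this port.
        "UserIdGroupPairs": [{"GroupId": "sg-ELB"}],  # placeholder ELB group
    }],
)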

Secure REST server on EC2 instance

I have a Python server (basic REST API) running on an AWS EC2 instance. The server supplies the data for a mobile application. I want my mobile app to connect to the python server securely over HTTPS. What is the easiest way that I can do this?
Thus far, I've tried setting up an HTTP/HTTPS load balancer with an Amazon certificate, but it seems that the connection between the ELB and the EC2 instance would still not be totally secure (HTTP in a VPC).
When you are securing access to a REST API in an EC2 instance, there are several considerations to look at:
Authentication & Authorization.
Monitoring of API calls.
Load balancing & life cycle management.
Throttling.
Firewall rules.
Secure access to the API.
Usage information by consumers, etc.
Several measures are mandatory to secure a REST API, such as:
Having SSL for communication (Note: Here SSL termination at AWS Load Balancer Level is accepted, since there onwards, the traffic goes within the VPC and also can be hardened using Security Groups.)
If you plan on getting most of the capabilities around REST APIs stated above, I would recommend proxying your service in EC2 through AWS API Gateway, which provides most of these capabilities out of the box.
In addition you can configure AWS WAF for additional security at the load balancer level (WAF supports the AWS Application Load Balancer).
In short, you can leverage these AWS services to handle most of the above.
Question answered in the comments.
It's fine to leave traffic between ELB and EC2 unencrypted as long as they are in the same VPC and the security group for the EC2 instance(s) is properly configured.