AWS Hybrid Cloud: Load Balancing Traffic - amazon-web-services

I am new to the hybrid cloud model and am planning to use the public cloud only when on-premises infrastructure doesn't have the capacity to handle the traffic.
1) How can traffic be served from the AWS public cloud? The data would stay on-premises; only the application load has to be shared between on-premises and the public cloud.
2) If the answer to question 1 is yes, how do I load balance the traffic between on-premises and the public cloud?
3) How is DNS managed - with on-premises DNS or Route 53?

1) How can traffic be served from the AWS public cloud? The data
would stay on-premises; only the application load has to be
shared between on-premises and the public cloud.
You are misunderstanding what hybrid cloud is. If your data lives in your datacenter and is served from your datacenter, then you are on-prem. In your scenario, you would need to route the Internet traffic through AWS to on-prem, which increases cost and latency; AWS, in this case, is just an expensive data pipe. This setup could increase fault tolerance if the on-prem public Internet connection fails and you have the correct router setup for failover.
For public hybrid cloud, you locate your data and services both in the cloud and on-prem. Then you can load balance, fail over, etc.
For private hybrid cloud, you are combining cloud resources with your datacenter resources for consumption either in the cloud, on-prem, or both at the same time. You can combine private hybrid cloud with public hybrid cloud.
The answers to #2 and #3 depend on what you have deployed on-prem and in the cloud and on how traffic needs to be routed, isolated, and protected.
In a typical environment, you would implement redundant routers with multiple connections to the Internet and to your cloud provider; these connections provide fault tolerance and routing. There are many options for setting up DNS, depending on the details of the implementation. For example, you can combine Route 53 with on-prem DNS using DNS forwarders.
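To make that last point concrete, here is a minimal sketch, assuming boto3 and an already-created Route 53 Resolver outbound endpoint, of forwarding queries for an on-prem zone to an on-prem DNS server; the domain, IDs, and IPs below are placeholders, not values from the question:

import uuid
import boto3

# Sketch: forward DNS queries for an on-prem zone from Route 53 Resolver
# to an on-prem DNS server. All names, IDs, and IPs are placeholders.
resolver = boto3.client("route53resolver")

rule = resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",                     # on-prem zone (placeholder)
    TargetIps=[{"Ip": "10.0.0.2", "Port": 53}],        # on-prem DNS server (placeholder)
    ResolverEndpointId="rslvr-out-0123456789abcdef0",  # existing outbound endpoint (placeholder)
)

# Associate the rule with the VPC whose workloads need on-prem name resolution.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",                     # placeholder
)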

Related

Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?

We have an app running on Cloud Run and it is authenticated only through API Gateway.
But Cloud Run still has a public *.run.app domain associated with it, which seems like it could be a security issue for sensitive applications that deal with PII data.
How can we run Cloud Run inside a private VPC network so that a private IP is assigned to it?
Is this a con for Cloud Run compared to GKE in terms of private VPC networking?
Cloud Run cannot have a "private" IP for your service; in general, Cloud Run will always have its own *.run.app domain.
That said, what you can do is restrict the ingress of the service, but keep in mind that if you set the service to Internal or Internal + Load Balancer, it will not be reachable by API Gateway, only by resources in the VPC.
Of course you can set an Internal Load Balancer + MIG as a proxy + Cloud Run private ingress but this increases the configuration overhead.
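As a minimal sketch of the ingress restriction mentioned above, assuming the google-cloud-run Python client (the project, region, and service names are placeholders), you can switch a service to internal-only traffic programmatically; the same change can be made in the console or with gcloud:

# Sketch: restrict an existing Cloud Run service to internal traffic only.
# Project, region, and service names are placeholders.
from google.cloud import run_v2

client = run_v2.ServicesClient()
name = "projects/my-project/locations/us-central1/services/my-service"

service = client.get_service(name=name)
service.ingress = run_v2.IngressTraffic.INGRESS_TRAFFIC_INTERNAL_ONLY

# update_service returns a long-running operation; block until it completes.
operation = client.update_service(service=service)
operation.result()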
I think this will change in the future, since there is a Feature Request to support internal HTTPS load balancers + serverless NEGs; with the Internal and Cloud Load Balancing ingress setting you will have a "private" IP for your service (you can ask for access to the preview here).
Answering your last question, Is this a con for Cloud Run over GKE in terms of private VPC network?: this is something you should evaluate according to your requirements; in general this particular question is opinion-based, which is off-topic. Consider the facts and choose what is better for you.

Google Cloud Platform - Pub/Sub push to private (VPN) on-premise listeners?

Official documentation for the Pub/Sub service states that push is available to listeners that are accessible on the public network:
An HTTPS server with non-self-signed certificate accessible on the public web.
That sounds pretty clear - but I wonder if I haven't missed something. Is it in any way possible to have the Pub/Sub service push messages to on-premises machines that are not on the public internet?
You should be able to achieve this with Cloud NAT:
1. Reserve a static external IP (a sketch of this step follows the list).
2. Link your DNS name to this IP.
3. Create a subnet.
4. Create a route from this subnet to your VPN.
5. Create a NAT with your external IP which forwards requests to your subnet.
6. Deploy an on-prem web server (Apache, nginx) with a valid certificate for your DNS name.
7. Update your on-prem routes so the web server is reachable, and don't forget to route the flow back!
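As a rough sketch of the first step, assuming the google-cloud-compute Python client (the project, region, and address name are placeholders); the remaining steps are often easier to express in gcloud or Terraform:

# Sketch: reserve a static external IP (step 1 above).
# Project, region, and address name are placeholders.
from google.cloud import compute_v1

client = compute_v1.AddressesClient()
address = compute_v1.Address(
    name="pubsub-push-ingress",   # placeholder
    address_type="EXTERNAL",
)

operation = client.insert(
    project="my-project",         # placeholder
    region="europe-west1",        # placeholder
    address_resource=address,
)
operation.result()                # wait for the reservation to complete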
Is it in any way possible to have the Pub/Sub service push messages to
on-premises machines that are not on the public internet?
Not easily, if at all. You might be able to use a reverse proxy, but this introduces several layers to manage: proxy configuration, the proxy compute instance, SSL certificates, VPC routing, the on-prem router, etc. See guillaume blaquiere's answer.
On-prem resources can reach Pub/Sub via the public Internet or via VPN to private.googleapis.com, but Pub/Sub cannot connect to on-prem or VPC resources configured with private IP addresses.
Cloud Pub/Sub push subscriptions require a publicly accessible HTTPS endpoint. If you want to reach on-premises machines, that has to be done via a proxy/router accessible over the public internet (as others have mentioned). Cloud Pub/Sub does not currently support VPC for push subscriptions.
Please see the note section under https://cloud.google.com/pubsub/docs/push
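To illustrate the constraint, a push subscription can only be pointed at a public HTTPS endpoint, such as the proxy described in the other answer. A minimal sketch with the google-cloud-pubsub client, where the project, topic, subscription, and endpoint are placeholders:

# Sketch: create a push subscription that delivers to a public HTTPS
# endpoint (e.g. a reverse proxy in front of the on-prem server).
# Project, topic, subscription, and endpoint are placeholders.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "my-topic")
subscription_path = subscriber.subscription_path("my-project", "my-push-sub")

push_config = pubsub_v1.types.PushConfig(
    push_endpoint="https://pubsub-proxy.example.com/push"  # must be publicly reachable HTTPS
)

with subscriber:
    subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            "push_config": push_config,
        }
    )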
Previous answers are outdated. You can use the restricted virtual IP with Private Google Access to provide a private network route for requests to Google Cloud services without exposing the requests to the internet.

AWS ECS Private and Public Services

I have a scenario where I have to deploy multiple microservices on AWS ECS. I want the services to be able to communicate with each other via the APIs developed in each microservice. I also want to deploy the front-end on AWS ECS so that it can be accessed publicly and can communicate with the other microservices deployed on AWS ECS. How can I achieve this?
Can I use AWS ECS service discovery, with all services in a private subnet, to enable communication between them?
Can I use an Elastic Load Balancer to make the front-end microservice accessible to end users over the internet only via HTTP/HTTPS protocols, while keeping it in a private subnet?
The combination of an AWS load balancer (for public access) and Amazon ECS service discovery (for internal communication) is a natural choice for this kind of web application.
Built-in service discovery in ECS is another feature that makes it
easy to develop a dynamic container environment without needing to
manage as many resources outside of your application. ECS and Route 53
combine to provide highly available, fully managed, and secure service
discovery
Service discovery is a technique for getting traffic from one container to another using the container's direct IP address, instead of an intermediary like a load balancer. It is suitable for a variety of use cases:
Private, internal service discovery
Low latency communication between services
Long-lived bidirectional connections, such as gRPC.
Yes, you can use AWS ECS service discovery, with all services in a private subnet, to enable communication between them.
This makes it possible for an ECS service to automatically register
itself with a predictable and friendly DNS name in Amazon Route 53. As
your services scale up or down in response to load or container
health, the Route 53 hosted zone is kept up to date, allowing other
services to lookup where they need to make connections based on the
state of each service.
Yes, you can use a load balancer to make the front-end microservice accessible to end users over the internet. You can look at this diagram, which shows an AWS LB and service discovery for a web application in ECS.
You can see that the backend container, which is in a private subnet, serves public requests through the ALB, while the rest of the containers use AWS service discovery.
Amazon ECS Service Discovery
Let’s launch an application with service discovery! First, I’ll create
two task definitions: “flask-backend” and “flask-worker”. Both are
simple AWS Fargate tasks with a single container serving HTTP
requests. I’ll have flask-backend ask worker.corp to do some work and
I’ll return the response as well as the address Route 53 returned for
worker. Something like the code below:
import os, socket
import requests
from flask import Flask

app = Flask(__name__)
namespace = os.getenv("namespace")     # e.g. "corp", the service discovery namespace
worker_host = "worker." + namespace    # resolves via Route 53 service discovery

@app.route("/")
def backend():
    r = requests.get("http://" + worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)
Note that in this private architecture there is no public subnet, just a private subnet. Containers inside the subnet can communicate with each other using their internal IP addresses. But they need some way to discover each other's IP addresses.
AWS service discovery offers two approaches:
DNS based: Route 53 creates and maintains a custom DNS name which resolves to one or more IP addresses of other containers, for example, http://nginx.service.production. Other containers can then send traffic to the destination by simply opening a connection using this DNS name.
API based: Containers can query an API to get the list of available IP address targets, and then open a connection directly to one of the other containers.
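As a rough sketch of the API-based approach (ECS service discovery is backed by AWS Cloud Map, which you can query with DiscoverInstances; the namespace and service names here are placeholders):

# Sketch of the API-based approach: query AWS Cloud Map (which backs
# ECS service discovery) for healthy instances of a service.
# Namespace and service names are placeholders.
import random
import boto3

servicediscovery = boto3.client("servicediscovery")

response = servicediscovery.discover_instances(
    NamespaceName="production",     # placeholder
    ServiceName="worker",           # placeholder
    HealthStatus="HEALTHY",
)

# Pick one registered container IP and open a connection to it.
instance = random.choice(response["Instances"])
ip = instance["Attributes"]["AWS_INSTANCE_IPV4"]
print("connect to http://{}/".format(ip))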
You can read more about AWS service discovery and its use cases in amazon-ecs-service-discovery and here.
According to the documentation, "Amazon ECS does not support registering services into public DNS namespaces"
In other words, when it registers the DNS record, ECS only uses the service's private IP address, which would likely be problematic: the DNS for the "public" services would resolve to private IP addresses, which would only work, for example, if you were on a VPN to the private network, regardless of your subnet rules.
I think a better solution is to attach the services to one of two load balancers: one internet-facing and one internal. I think this works more naturally for scaling the services anyway. Service discovery is cool, but it is really more for services talking to each other, not for external clients.
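A rough sketch of that wiring with boto3, showing only the internet-facing side (all names, IDs, and the target IP are placeholders; the internal load balancer is the same calls with Scheme="internal"):

# Sketch: internet-facing ALB in public subnets, forwarding to targets
# in private subnets. All IDs, names, and IPs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="frontend-alb",
    Subnets=["subnet-public-a", "subnet-public-b"],   # public subnets (placeholders)
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="frontend-tasks",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                    # placeholder
    TargetType="ip",               # ECS awsvpc tasks register by private IP
)

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[{"Id": "10.0.2.15"}],  # task ENI IP in a private subnet (placeholder)
)

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)

In practice the ECS service itself registers and deregisters the task IPs in the target group; registering a target manually here just stands in for that integration.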
I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS.
I would use service discovery to wire the services internally and the Elastic Load Balancer integration to make them accessible to the public.
The load balancer can do the load balancing on the public side, and the DNS SRV records can do the load balancing for your APIs internally.
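For the internal side, a client can resolve the SRV record that service discovery maintains and pick a target by priority and weight. A minimal sketch with dnspython, where the record name is a placeholder:

# Sketch: resolve the SRV record that ECS service discovery maintains
# and pick one host:port target. The record name is a placeholder.
import dns.resolver  # pip install dnspython

answers = dns.resolver.resolve("_worker._tcp.production.local", "SRV")

# Lowest priority wins; weight breaks ties (simplified here to sorting).
records = sorted(answers, key=lambda r: (r.priority, -r.weight))
target = records[0]
print("connect to {}:{}".format(target.target.to_text().rstrip("."), target.port))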
There is a similar question here on Stack Overflow and the answer [1] to it outlines a possible solution using the load balancer and the service discovery integrations in ECS.
Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
Yes, the load balancer can register targets in a private subnet.
References
[1] https://stackoverflow.com/a/57137451/10473469

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, which includes a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure load balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I could see achieving this are:
Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable - can Service Fabric services even "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample, not approved by Microsoft, with a cross-region Service Fabric cluster configuration (I know these are different regions within Azure, not different cloud providers, but the sample is useful to see how some of the pieces are configured).
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end, the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating multiregion / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud or on premises.
The one detail that is not made clear is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, autoscaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside in the SF roadmap for a long time because they are very complex to do, which is why they avoid recommending them; but it is possible.
If you want to pursue the AWS approach, I point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
This is the first of a 4-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.

Is it possible to secure a load balanced endpoint in Azure?

Is there any way I can have a load balanced endpoint that does not get exposed publicly in Azure?
My scenario is that I have an endpoint running on multiple VMs. I can create a load balanced endpoint, but this creates a publicly available endpoint.
I only want my load balanced endpoint to be available to my web applications running in Azure (web/worker roles and Azure Websites).
Is there any way to do this?
As @Brent pointed out, you can set up ACLs on Virtual Machine endpoints. One thing you mentioned in your question was the ability to restrict inbound traffic to only your web/worker role instances and Web Sites traffic.
You can certainly restrict traffic to web/worker instances, as each cloud service gets an IP address, so you just need to allow that particular IP address. Likewise, you can use ACLs to restrict traffic to other Virtual Machine deployments (especially in the case where you're not using a Virtual Network). Web Sites, on the other hand, don't offer a dedicated outbound IP address, so you won't be able to use ACLs to manage Web Sites traffic to your Virtual Machines.
Yes, Windows Azure IaaS supports ACLs on endpoints. Using this feature, you can restrict who connects to your load balanced endpoints. For more information, see: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/