I currently have two services in GCP: one is a Google App Engine flexible environment app and the other is a Cloud Run service. The idea is that a load balancer can distribute traffic between those two services. Does anyone know how I can proceed?
You can create an HTTP(S) Load Balancer with two backend services, one for App Engine and the other for Cloud Run.
Consider these notes:
Your LB will direct requests to each backend depending on the path, and each backend will handle balancing according to the specs you have configured for it.
You will create and configure each backend separately, and both will be of type serverless network endpoint group (serverless NEG). See here for details.
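A minimal sketch of the gcloud side of this, assuming a Cloud Run service named my-run-service and the default App Engine service, both in us-central1 (all names and the /api/* path split are placeholders, and the target proxy / forwarding rule steps are omitted):

```
# Serverless NEG pointing at the App Engine service
gcloud compute network-endpoint-groups create appengine-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --app-engine-service=default

# Serverless NEG pointing at the Cloud Run service
gcloud compute network-endpoint-groups create cloudrun-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-run-service

# One global backend service per NEG (serverless backends need no health check)
gcloud compute backend-services create appengine-backend --global
gcloud compute backend-services add-backend appengine-backend --global \
    --network-endpoint-group=appengine-neg \
    --network-endpoint-group-region=us-central1

gcloud compute backend-services create cloudrun-backend --global
gcloud compute backend-services add-backend cloudrun-backend --global \
    --network-endpoint-group=cloudrun-neg \
    --network-endpoint-group-region=us-central1

# URL map: App Engine by default, Cloud Run for /api/* paths
gcloud compute url-maps create my-lb --default-service=appengine-backend
gcloud compute url-maps add-path-matcher my-lb \
    --path-matcher-name=api-matcher \
    --new-hosts='*' \
    --default-service=appengine-backend \
    --path-rules='/api/*=cloudrun-backend'
```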
Google has recently enabled centralised load balancing with cross-project service referencing.
I have successfully implemented a shared VPC with a regional load balancer in a host project. The load balancer works, handing traffic off to a backend in a service project.
This is all good.
Previously I had been using a global load balancer and was able to use Identity-Aware Proxy on this service backend. Now that things are restructured, the option to use IAP has disappeared.
I am not sure whether this is a limitation of the cross-project style of load balancing or whether I am missing something. The backend service in question is serverless Cloud Run.
I have tried various options in the load balancer setup, the backend setup, and the shared VPC setup, but nothing seems obvious. I have also reviewed the documentation; the feature is relatively new, so there isn't much written on it yet.
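For readers unfamiliar with the setup being described, the cross-project wiring is roughly this: the backend service lives in the service project, and the URL map in the host project references it by its full resource path. A rough sketch, with the project IDs, names, and region all being placeholders:

```
# In the service project: the regional backend service fronting Cloud Run
# (the load-balancing scheme depends on whether the LB is internal or regional external)
gcloud compute backend-services create run-backend \
    --project=service-project --region=europe-west1 \
    --load-balancing-scheme=INTERNAL_MANAGED

# In the host project: the URL map points at that backend service cross-project
gcloud compute url-maps create shared-lb \
    --project=host-project --region=europe-west1 \
    --default-service=projects/service-project/regions/europe-west1/backendServices/run-backend
```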
My client is asking me to deploy a web application (Node.js back end + React front end) on two EC2 servers, in order to achieve good load balancing and autoscaling based on traffic.
Note: the client doesn't want to go for a single, more powerful server.
There are multiple ways of achieving a satisfactory architecture for this problem. If we are looking at using EC2 instances, we can do the following:
Deploy your back end into a target group for an Auto Scaling Group and put an Application Load Balancer in front of it. Instances can automatically register with the load balancer, which distributes traffic between them (see the CLI sketch after this list).
Deploy your static front-end application into an S3 bucket and, if necessary, use a CloudFront distribution for caching and faster loads.
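A rough AWS CLI sketch of the back-end half, assuming a VPC, two subnets, and a launch template for the Node.js instances already exist (all IDs, names, and ports below are placeholders):

```
# Target group the Node.js instances will be registered into
aws elbv2 create-target-group --name node-tg \
    --protocol HTTP --port 3000 --vpc-id vpc-0abc123 \
    --health-check-path /health

# Application Load Balancer in front of it, plus a listener forwarding to the group
aws elbv2 create-load-balancer --name node-alb \
    --subnets subnet-0aaa111 subnet-0bbb222
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>

# Auto Scaling Group that keeps 2-4 instances registered with the target group
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name node-asg \
    --launch-template LaunchTemplateName=node-template \
    --min-size 2 --max-size 4 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-0aaa111,subnet-0bbb222" \
    --target-group-arns <tg-arn>
```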
Assuming the front end is an SPA (browser-rendered HTML), host the React part on S3 + CloudFront.
Regarding deploying Node on EC2:
Use CloudFormation to set up the infrastructure (the EC2 machines, ASG, and load balancer)
Then use CodeDeploy to deploy and update the application
Here is a post on deploying Node.js using CodeDeploy: https://hub.packtpub.com/deploy-nodejs-apps-aws-code-deploy/
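For reference, CodeDeploy drives the deployment from an appspec.yml at the root of your bundle; a minimal one for EC2 might look like this (the destination path and script names are illustrative):

```
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
```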
You might find it easier to use Elastic Beanstalk, though.
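With the EB CLI, the same setup reduces to something like this (the application and environment names are placeholders):

```
# Initialize the app with the Node.js platform, then create a load-balanced
# environment; Beanstalk provisions the ALB and Auto Scaling Group for you
eb init my-app --platform node.js --region us-east-1
eb create my-env --elb-type application
eb scale 2   # run two instances behind the load balancer
```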
We currently have all of our cloud assets inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I can see to achieve this are to either:
Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that is even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I am not sure if this is technologically achievable: can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but I am very new to the idea of using two cloud providers in a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to give you an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but the sample is useful for seeing how some of these things are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in the Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is often unclear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, autoscaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside on the SF roadmap for a long time because they are very complex to build, which is why Microsoft avoids recommending them, but they are possible.
If you want to go the AWS route, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
It is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.
So I installed and set up a new WordPress installation on Google Cloud. I also set up an HTTP(S) load balancer for Cloud CDN, and the LB is now working. Please help with how I can connect the two so that my site runs with Cloud CDN. Thanks in advance.
Since you would like to set up an HTTP(S) load balancer for Cloud CDN, you need to configure at least the backend and frontend configuration under “HTTP(S) Load Balancing”.
In the load balancing backend configuration you have the option of selecting an instance group, so before creating the “backend service” you need to create an instance group. Since you already created the WordPress VM instance, you should create an unmanaged instance group. In the backend configuration you will then have the option of enabling Cloud CDN and selecting a health check. So along with the instance group, please also create a health check following this documentation.
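The equivalent gcloud steps for the backend half might look like this, assuming the VM is named wordpress-vm in us-central1-a (all names are placeholders):

```
# Unmanaged instance group containing the existing WordPress VM
gcloud compute instance-groups unmanaged create wp-group --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances wp-group \
    --zone=us-central1-a --instances=wordpress-vm
gcloud compute instance-groups unmanaged set-named-ports wp-group \
    --zone=us-central1-a --named-ports=http:80

# Basic HTTP health check
gcloud compute health-checks create http wp-health-check --port=80

# Backend service with Cloud CDN enabled, backed by the instance group
gcloud compute backend-services create wp-backend \
    --global --protocol=HTTP --health-checks=wp-health-check --enable-cdn
gcloud compute backend-services add-backend wp-backend --global \
    --instance-group=wp-group --instance-group-zone=us-central1-a
```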
After completing the backend configuration, you will have to configure the frontend, which is very straightforward. The frontend is your virtual IP (VIP), also called an anycast IP in GCP. One frontend can serve multiple regions (backends). In most cases you would want a static (reserved) IP and not the default ephemeral one; this way you can easily point an A record in your Cloud DNS zone file to your load balancer IP.
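And the frontend half, reserving a static IP that your A record can point at (again, names are placeholders):

```
# URL map and proxy tying the frontend to the CDN-enabled backend
gcloud compute url-maps create wp-lb --default-service=wp-backend
gcloud compute target-http-proxies create wp-proxy --url-map=wp-lb

# Reserved (static) global IP instead of the default ephemeral one
gcloud compute addresses create wp-ip --global

# Forwarding rule: this VIP is what clients and your DNS A record hit
gcloud compute forwarding-rules create wp-frontend \
    --global --target-http-proxy=wp-proxy --address=wp-ip --ports=80
```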
We currently have a production application using Kubernetes on AWS. Everything's working very well, except I think we've made a configuration mistake.
We expose different services from within the cluster on domain names, and we're now up to about 5 different services. Kubernetes' standard way to expose these services is through load balancers, but with our config we've created 6 load balancers. As you can imagine, that many load balancers running can incur substantial cost overheads.
Is there any way to configure an individual load balancer to route to Kubernetes targets based on domain names, so we can have one domain pointing at an ELB and have that route to the correct services internally?
You can use an Ingress controller. Ingress will set up a single AWS load balancer and can be used to expose many services. If your services are all HTTP-based, it should work quite well. For more information about Ingress, have a look at the Kubernetes docs or at the default NGINX-based implementation. If needed, there are also other implementations using, for example, the Envoy proxy.
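A minimal sketch of host-based routing with the NGINX Ingress controller, assuming Services named web-svc and api-svc already exist in the cluster (the hostnames are placeholders):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx
  rules:
    # Route by Host header: one rule per domain, each pointing at a Service
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
```

The controller itself is exposed through a single Service of type LoadBalancer, so only one ELB is created; the controller then fans requests out to the right Service based on the Host header.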