Clustering Eureka Servers in Google Cloud

We are using Spring Cloud Netflix Eureka for Service Registration. We will be deploying all microservices in GCP (Google Cloud).
Environment
We have Eureka Servers running as a cluster.
Each Eureka Server registers itself as a client with its peer in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8762/eureka
Client microservices register/enroll themselves by providing the Eureka Server IPs in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8761/eureka,http://xx.xx.xx.xxx:8762/eureka
Since IP addresses and hostnames are dynamic in the cloud, can we configure Eureka Servers in a cluster without using IP addresses/hostnames?
Please provide a sample configuration to use in Google Cloud.

GCP maintains an internal DNS resolver for subnets (if you are using default OS images).
So you can use hostnames to resolve IP addresses, e.g. prod-redis-2.c.project-<id>.internal.
You may need to configure links between subnets to avoid making IP addresses public.
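A minimal sketch of what the properties could then look like, assuming two hypothetical Eureka hosts named eureka-1 and eureka-2 registered in the internal DNS:

# on eureka-1, pointing at its peer via the internal DNS name (hostnames are placeholders)
eureka.client.service-url.default-zone=http://eureka-2.c.project-<id>.internal:8762/eureka

# on client microservices, listing both peers
eureka.client.service-url.default-zone=http://eureka-1.c.project-<id>.internal:8761/eureka,http://eureka-2.c.project-<id>.internal:8762/eureka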

I have not used GCP, but I have implemented and deployed Spring Cloud on PCF (which, at a high level, is pretty much the same as GCP).
You cannot make defaultZone completely dynamic. Why? Because these properties are picked up during application startup.
There needs to be something (some service or database) in your architecture that tells your services the dynamic hostnames/IP-addresses of other services. That is the Eureka server in your case. All services need to know the address (hostname/IP-address) of the Eureka service. Now if the Eureka server's hostname is dynamic, how will your services learn the new hostname of the Eureka server when it changes?
You'll have to update the address of the Eureka server manually. At most, you can externalize defaultZone to a centralized configuration server (or something similar). That way you'll only have to update the new address in one place.
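A minimal sketch of that externalization, assuming a hypothetical Spring Cloud Config server reachable at config-server.internal:

# bootstrap.properties in each service - only the config server's address is baked in
spring.cloud.config.uri=http://config-server.internal:8888

# property served centrally by the config server - update the Eureka address in one place
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8761/eureka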

Related

GCP GKE - Loadbalancer alternative

I'm using a load balancer for the prod site. But for internal services (like GitLab, Jenkins) I don't want to host a load balancer. Is there an alternative way to connect to internal services without using load balancers? Could a bastion host do the job?
Having load balancers for both prod and internal services seems to cost around 35 to 45 dollars. I'm trying to reduce the total bill.
I have an nginx ingress controller for the production site, and I'm wondering if I could do something with it using subdomains for the internal services.
Services on a Kubernetes cluster can talk to each other using their ClusterIP, without any load balancer. The IP can be resolved through the internal DNS service, such as kube-dns or CoreDNS.
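A minimal sketch of such an internal-only Service, assuming a hypothetical Jenkins Deployment labelled app: jenkins in a tools namespace:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: tools
spec:
  # ClusterIP is the default type: reachable only inside the cluster, no load balancer
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080

Inside the cluster (or over a VPN that routes into it), the service is then resolvable as jenkins.tools.svc.cluster.local.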
If you need to connect services in different clusters, I'd take a look at Kubefed or Submariner for multi-cluster deployment.
Both, by the way, also harness DNS service discovery (the term used for in-Kubernetes service communication) for service-to-service communication.
Kubefed
Submariner

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in Go) which are used by the "main" server to perform certain tasks but also have some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine, with an apache2 in front that directs the requests. The servers can be reached through apache2 via their respective subdomains, but they also communicate with each other; currently we simply send those requests to localhost:{PORT}, since they all run on the same machine. They furthermore all use the same mysql-server running on that same machine.
We are now looking to make this more production-ready and want to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Setup the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created when you log in for the first time).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB will take all the traffic and distribute the load onto the servers based on the endpoint.
The EC2 instances won't have public IPs, however. A VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also give domain/sub-domain names to these private IP addresses using AWS's Route53 service.
For example, you launch 2 servers:
Your Java application - on 172.31.1.1 (which you name xyz.myjavaapp.something.com in Route53)
Your Angular application - on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
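A minimal sketch of creating such a record with the AWS CLI, assuming a private hosted zone (the zone ID is a placeholder):

# create an A record pointing the sub-domain at the server's private IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "xyz.myjavaapp.something.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "172.31.1.1"}]
      }
    }]
  }'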
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy a SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups so that only your servers can access it, and don't leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
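A minimal sketch with the AWS CLI (the security group ID is a placeholder; port 3306 assumes MySQL):

# allow DB traffic only from within the VPC's CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 172.31.0.0/16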
Q. Setup the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, this can easily be managed by the ALB, as in the sketch below.
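A minimal sketch of such a path-based rule with the AWS CLI (the ARNs are placeholder stubs, left truncated on purpose):

# route /getData requests to the target group holding its two servers
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
  --priority 10 \
  --conditions Field=path-pattern,Values='/getData*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/getdata-servers/...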
I would suggest you use at least two servers for each critical service and load balance them behind an ELB for the production env.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, there is a learning curve, but the benefits outweigh it.
Feel free to ask questions.

How do I pass Eureka service ip to my java application running in a docker container?

I have a number of Java and Python services running in Docker containers in a clustered environment. I'm using Eureka for service discovery, and it works fine locally with the Eureka IP address hardcoded in the application configuration files. My problem is flexible configuration of the Eureka address for the Java services - the containers will be deployed in three environments where Eureka will have different IP addresses.
Is there a way to pass the Eureka URI using, e.g., an environment variable?
Or, if I pass the URI as an application argument, how can I get it propagated to the Eureka client configuration?
PS: I use AWS ECS, and due to the number of services and existing AWS constraints I cannot put all Docker containers in a single task definition, so I cannot use Docker name resolution and just hardcode the Eureka hostname; on the other hand, I might have multiple Eureka instances and would like to specify which one a particular container should use.
The answer to this question of mine turned out to be using a configuration server; a description of this beast can be found at https://dzone.com/articles/using-spring-config-server.
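As for the environment-variable route asked about above: Spring Boot's relaxed binding also lets you override the property directly from the environment, so a sketch like the following should work per environment (the IP is a placeholder):

# docker run, overriding the Eureka URI through relaxed property binding
docker run -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://10.0.0.5:8761/eureka my-java-service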

Eureka with AWS ECS

We are using Eureka with the AWS ECS service, which can scale Docker containers.
In ECS, if you leave out the host port, or specify it as '0', in your task definition, the port will be chosen automatically and reported back to the service. After the task is running, describing it should show what port(s) it bound to.
How can Eureka resolve which port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but it cannot identify the exposed port.
Hi #Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing.
The main difference is that traffic flows through the (application) load balancer, whereas the service registry just gives you a healthy endpoint that your client can address directly (so the network traffic does not flow through the service registry).
Cheap is a very relative term; maybe it's cheap for some, maybe it's unnecessary overhead for others.
The issue has been resolved: https://github.com/Netflix/eureka/issues/937
The ECS agent now knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same job: it works as service registry and discovery. You don't need to run an additional service (Eureka), and an ALB is cheap.
There is another solution.
You can create an application load balancer and a target group in which the Docker containers can be launched.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can utilize Route53 for DNS routing.
It looks like this:
(diagrams: "Service Discovery with Loadbalancer-Hostname" and "Request Flow")
If you have two containers of the same task on different hosts, both will report the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.
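A minimal sketch of the container-side instance configuration, assuming a hypothetical ALB DNS name:

# every container registers under the load balancer's hostname (placeholder DNS name)
eureka.instance.hostname=my-alb-123456789.eu-central-1.elb.amazonaws.com
eureka.instance.non-secure-port=80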

Pre-deploy development communication with an Internal Kubernetes service

I'm investigating a move to Kubernetes (coming from AWS ECS), but I haven't solved the local development issue of depending on internal services.
Let me elaborate:
When developing and testing microservices, before they are deployed as a Kubernetes Service, I want to be able to talk to other, internal Kubernetes Services. As there are > 20 microservices, I have a Kubernetes cluster running the latest development versions; I can't run Minikube.
Example:
I'm developing a user-service which needs access to the email-service. The email-service is already on Kubernetes and is internal only.
So before the user-service is deployed, I want to be able to talk to the internal email-service for dev/testing. I can't make use of K8s' nice service-discovery env vars.
As we already have a VPN up to restrict the DEV env to testers/developers only, could I use this VPN to provide access to the Kubernetes Service IP addresses? The Kubernetes DEV env is in the same VPC as the VPN.
If you deploy your internal services as type NodePort, you can access them over your VPN via that nodePort. NodePorts can be allocated dynamically, or you can customize them to be 'static' so that they are known to you up front.
When developing an app on your local machine, you can access the dependent service via that NodePort.
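A minimal sketch of such a Service, assuming a hypothetical email-service Deployment labelled app: email-service:

apiVersion: v1
kind: Service
metadata:
  name: email-service
spec:
  type: NodePort
  selector:
    app: email-service
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # 'static' choice; must fall in the cluster's NodePort range (default 30000-32767)

Over the VPN, any node's private IP on port 30080 then reaches the email service.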
As an alternative, you can use port-forwarding from kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod to your local machine. (Note: this only handles traffic to a pod, not to a service.)
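For example (the pod name is a placeholder):

kubectl port-forward email-service-6f7b9-abc12 8080:8080

After that, http://localhost:8080 on the developer machine reaches that pod directly.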
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes that developers have kubectl access to the staging/dev cluster.