Using Consul to keep track of all the REST endpoints - web-services

Requirement: 5 different internal apps expose REST endpoints, and more applications are coming that will expose further endpoints. When these apps are deployed, the new endpoints need to auto-register, and we need some kind of web UI that lists all the endpoints in the cluster.
We are evaluating consul.io, and I am not sure whether it is a good fit for this requirement.
I went through the Consul tutorial, and I am confused about the concept of service registration.
How would one register a RESTful endpoint when the app server comes up?
Also, if app1 wants to know the exact endpoint address exposed by app2, would app1 be able to query Consul for that?
Any other Git repos to check out?
PS: The applications are built on the Play Framework.

Let your apps register themselves with the Consul agent when they come online. To achieve this, you can use the agent's service registration call (PUT /v1/agent/service/register).
When app2 is registered, app1 can query the Consul DNS interface or the Consul HTTP API to get the endpoint address of app2.
Note that when using the DNS functionality, app2 will only show up if the app is marked as healthy (all registered checks for that host and service are passing).
If you use the API calls, you have more options.
The catalog call (GET /v1/catalog/service/<name>) will fetch all services with the specified name without taking their health checks into account (but if the host is offline, the service will not be listed).
The health call (GET /v1/health/service/<name>) will fetch all services with the specified name along with the status of their health checks. If you add "?passing" to the URL, only the healthy services will be listed (like with the DNS option).
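For example, here is a minimal sketch of both sides using Java's built-in HTTP client, assuming a Consul agent on localhost:8500 and a Play app listening on port 9000; the service name, port, and health-check path are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulExample {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // app2 registers itself with the local Consul agent, including an HTTP
        // health check that Consul will poll every 10 seconds.
        String service = "{"
                + "\"Name\": \"app2\","
                + "\"Port\": 9000,"
                + "\"Check\": {\"HTTP\": \"http://localhost:9000/health\", \"Interval\": \"10s\"}"
                + "}";
        http.send(HttpRequest.newBuilder(URI.create("http://localhost:8500/v1/agent/service/register"))
                        .PUT(HttpRequest.BodyPublishers.ofString(service))
                        .build(),
                HttpResponse.BodyHandlers.discarding());

        // app1 asks for the healthy instances of app2; the JSON response
        // contains each instance's address and port.
        HttpResponse<String> healthy = http.send(
                HttpRequest.newBuilder(URI.create("http://localhost:8500/v1/health/service/app2?passing")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(healthy.body());
    }
}
```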

The problem (and a solution) is nicely explained in Jeff's article Automatic Docker Service Announcement with Registrator. In summary, if your services are Docker-ised, you'd use gliderlabs/registrator to auto-register/de-register services as they come and go, and then look them up in Consul. Here's a complete tutorial on how this can be done.

Related

How to keep the endpoint URL the same when deploying a microservice on different servers

I am working on some microservices that get called by some other applications. Sometimes I have to move the microservices to different servers, and every time I do, I have to inform the other teams and ask them to update their applications to call the new endpoints on the new server.
Is there a way to keep the URLs the same for the consumers, so that I can deploy the service anywhere and they don't have to update the endpoint on their end every time?
Thank you,
JW
If you change the server, the URL changes.
Directly exposing the individual URLs of microservices is not a good idea.
If you are using Kubernetes, you can connect to a microservice through its service name, provided that microservice is part of the same Kubernetes cluster,
e.g. http://service-name:8080/customer/{id}
You can use an API gateway, which becomes the single point of contact for your microservices. In this case, the relative URL never needs to change,
e.g. http://{API-GATEWAY}/customer/{id}
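As a minimal consumer-side sketch (the gateway address and route here are hypothetical), the consumer only ever configures the gateway's address, so moving the microservice behind it never changes the calling code:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomerConsumer {
    public static void main(String[] args) throws Exception {
        // Only the gateway address is configured; redeploying the customer
        // microservice on another server never touches this consumer.
        String gateway = System.getenv().getOrDefault("API_GATEWAY_URL", "http://api-gateway.internal");
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(gateway + "/customer/42")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}
```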

How to expose just one microservice from a Kubernetes cluster to an existing load balancer, keeping the other services within the cluster only

Hello and thanks in advance.
First I want to provide some context to make answering my question easier.
We are using Google Cloud.
We have reached the point where our need to deploy updates easily to various parts of the application has run into the limitations of our monolithic architecture.
Our app is not very big, but it already has 2 physical services: the backend (the scope being updated), and a caching server that caches data and provides Mongo-like search over data from Google Datastore.
We have 2 options here.
"plugins" - like nanoservices running within same process which are developed in a way that these nanoservices do not know they are on the same process, all they know is a set of "plugins shell API" injected at activation of a nanoservice code. This shell gives the nanoservice access to a database, logging, configuration, routes registration, control events like refresh pages map and some metadata like website root url and root of static content deployed as supply stuff for a service version. Like https://static.server.com/deployments/foo/v2
standard microservices on kubernetes where same API mentioned exposed to each service via "shell client" package deployed as part of container image.
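To make option 1 concrete, here is a hypothetical sketch of what such a plugin shell interface could look like; every name in it is invented for illustration:

```java
import java.util.Properties;
import java.util.logging.Logger;
import javax.sql.DataSource;
import com.sun.net.httpserver.HttpHandler;

// Hypothetical "plugin shell" handed to each nanoservice at activation time;
// the nanoservice never learns whether it shares a process with others.
public interface PluginShell {
    DataSource database();                                 // access to the database
    Logger logger();                                       // shared logging facility
    Properties config();                                   // configuration snapshot
    void registerRoute(String path, HttpHandler handler);  // route registration
    void onPagesMapRefresh(Runnable listener);             // control events, e.g. pages-map refresh
    String websiteRootUrl();                               // metadata: root URL of the website
    String staticContentRoot();                            // e.g. https://static.server.com/deployments/foo/v2
}
```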
In short, this is the common "infrastructure vs. library" dilemma often mentioned in the articles about microservices I have read.
For the library approach I already have some vision of how to implement all of this, including hot module replacement without stopping the server, but the more I read about Kubernetes, the stronger I feel that I am reinventing the Kubernetes (or similar) wheel.
Here is how I imagine it:
1) There is a router service, the single service exposed outside the cluster, attached as a backend to the load balancer we already have.
It will handle authentication/authorization of outside requests and pick the page to be rendered or the API endpoint to be invoked. When a page is requested, the related template is loaded, and the data for pre-rendering is fetched by calling the related endpoint exposed by a module service. When a public API endpoint is picked, the matching service endpoint is called.
There are a few services, including:
the caching service (the service currently deployed on a separate group of servers);
the updates service, which handles switching module service versions and provides an API to do so via some admin UI;
the module services (one per module). Each module exposes endpoints for providing preloaded page data, an endpoint listing the page routes to register, the API endpoint implementations, and an endpoint listing the exposed API routes to be invoked through the router service;
the router service, which processes external requests and dispatches them to the other services as appropriate using a cached routes map, updated whenever one of the internal services (e.g. the updates service) broadcasts a pages-map refresh event.
What is stopping me from starting to use Kubernetes right away is the lack of knowledge about how to implement the following scenarios:
1) Only one microservice must be exposed outside the cluster: the routing service.
2) Reuse the built-in service discovery etc. to communicate with services within the cluster, like the caching server.
3) The cluster's router service would be attached as a backend to the cloud load balancer we already have.
In my opinion, you should have a look at the NGINX Ingress Controller to build your routing scheme; more information can be found here and here.
EDIT: In addition, you can try some other ingress controllers; among them, Istio and Traefik are definitely worth your attention as alternatives to the NGINX Ingress Controller.

Spring Boot - what are the different systems (Eureka, Zuul, Ribbon, NGINX) used for?

I have been working with Spring and would now like to learn Spring Boot and microservices. I understand what microservices are all about and how they work. While going through the docs I came across many things that are used to develop microservices along with Spring Boot, and I am very confused by them.
I have listed the systems below, along with my questions:
Netflix Eureka - I understand this is a service discovery platform. All services will be registered with the Eureka server, and all microservices are Eureka clients. My doubt is: without an API gateway, is there any use for this service registry? This is to understand the actual use of a service registry.
Zuul API gateway - I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct? Will the API gateway interact with Eureka to find the appropriate microservice?
NGINX - I have read that NGINX can also be used as an API gateway. Is that possible? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both? I know NGINX is a web server and its reverse proxying can be configured very powerfully.
AWS API Gateway - Can this also be used as an alternative to Zuul?
Ribbon - What is Ribbon used for? I didn't understand it!
AWS ALB - This can also be used for load balancing. So do we need Zuul if we have an AWS ALB?
Please help.
Without an API gateway, is there any use for this service registry?
Yes. For example, you can use it to locate the IP and port of all your microservices. This comes in handy for DevOps-type work. For example, on one project I worked on, we used Eureka to find all instances of our microservices and ping them for their status (/health, /info).
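As a small sketch of that use case with Spring Cloud's DiscoveryClient (the bean wiring is illustrative, and the /health path assumes the Spring Boot 1.x Actuator default):

```java
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class HealthPinger {
    private final DiscoveryClient discoveryClient;
    private final RestTemplate restTemplate = new RestTemplate();

    public HealthPinger(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Walk every service the registry knows about and ping each instance.
    public void pingAll() {
        for (String serviceId : discoveryClient.getServices()) {
            for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                String status = restTemplate.getForObject(instance.getUri() + "/health", String.class);
                System.out.println(serviceId + " @ " + instance.getUri() + " -> " + status);
            }
        }
    }
}
```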
I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct?
Yes, but it can do a lot more. Essentially, because Zuul is more of a framework/library that you turn into a microservice yourself, you can code it to implement any kind of routing logic you can come up with. It is very powerful in that sense. For example, say you want to change how you route based on the time of day or any other external factor; with Zuul you can do it.
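For instance, here is a sketch of a Zuul "pre" filter that reroutes requests in the evening; the alternative service id is a made-up example, and the filter still has to be registered as a Spring bean:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.cloud.netflix.zuul.filters.support.FilterConstants;
import java.time.LocalTime;

// Runs after Spring Cloud's PreDecorationFilter so it can override the
// service id that the request will be routed to.
public class TimeOfDayRoutingFilter extends ZuulFilter {
    @Override public String filterType() { return FilterConstants.PRE_TYPE; }
    @Override public int filterOrder() { return FilterConstants.PRE_DECORATION_FILTER_ORDER + 1; }
    @Override public boolean shouldFilter() { return LocalTime.now().getHour() >= 18; }

    @Override public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        // "reports-batch" is a hypothetical service id registered in Eureka.
        ctx.set(FilterConstants.SERVICE_ID_KEY, "reports-batch");
        return null;
    }
}
```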
Will the API gateway interact with Eureka to find the appropriate microservice?
Yes. You configure Zuul to point at Eureka. It becomes a client of Eureka and even subscribes to Eureka for real-time updates (which instances have joined or left).
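The wiring is mostly annotations; a minimal sketch (the Eureka server's address typically goes into application.properties as eureka.client.serviceUrl.defaultZone):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// @EnableZuulProxy turns this app into a gateway; because it is also a
// discovery client, it resolves route service ids against the registry.
@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```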
I have read that NGINX can also be used as an API gateway? I also read somewhere else that NGINX can be used as a service registry, as an alternative to Eureka! So which is right: API gateway, service registry, or both?
NGINX is pretty powerful and can do API-gateway-type work. But there are some major differences. AFAIK, microservices cannot dynamically register with NGINX the way they can with Eureka (please correct me if I am wrong). Second, while I know NGINX is highly (very highly) configurable, I suspect its configuration abilities do not come close to Zuul's routing capabilities (with Zuul you have the whole Java language at your disposal to code your routing logic). There may be service discovery solutions that work with NGINX; NGINX would then take care of the routing and such, but service discovery would still require a separate solution.
Can this also be used as an alternative to Zuul?
Yes, AWS API Gateway can be used as a Zuul replacement of sorts. The issue here, just like with NGINX, is service discovery. AWS API Gateway lets you apply logic to your routing, though it is not as open-ended as Zuul.
What is Ribbon used for?
While you can use the Ribbon library directly, for the most part consider it an internal dependency of Zuul: it helps Zuul do the simple load balancing that it does. Please note that this project is in maintenance mode and no longer recommended.
This can also be used for load balancing. So do we need Zuul if we have an AWS ALB?
You can use an ALB with ECS (Elastic Container Service) to replace Eureka/Zuul. ECS takes care of the service discovery for you and maps all instances of a particular service to a target group. Your ALB routing table can then route to target groups based on simple routing rules. The routing rules in ALB are very simple, though they are improving over time.
Different systems that can be used to make microservices work alongside Spring Boot:
Eureka:
Probably the first microservice to be up. Eureka is a service registry: it knows which microservices are running and on which ports. Eureka is deployed as a separate application, and we can use the @EnableEurekaServer annotation along with @SpringBootApplication to make that app a Eureka server. Once the Eureka service registry is up and running, all microservices register themselves with it by using the @EnableDiscoveryClient annotation along with @SpringBootApplication.
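A minimal sketch of the server side (each client application looks the same but uses @EnableDiscoveryClient instead of @EnableEurekaServer):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// A standalone Spring Boot app that does nothing but act as the registry.
@SpringBootApplication
@EnableEurekaServer
public class RegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(RegistryApplication.class, args);
    }
}
```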
Zuul: Zuul is a load balancer, a routing application, and a reverse proxy server as well. Where we previously used Apache for reverse proxying, for microservices we can use Zuul. The advantage is that in Zuul we can set configurations programmatically, e.g. if the path matches /customer/*, go to this microservice. Zuul can also act as a load balancer, picking the appropriate microservice instance in a round-robin fashion. So how does Zuul know the details of the microservices? The answer is Eureka: Zuul works along with Eureka to get microservice details. In fact, Zuul is itself a Eureka client, marked with @EnableDiscoveryClient; that is how the two apps (Eureka and Zuul) are linked.
Ribbon:
Ribbon is used for load balancing. It is already available inside Zuul, which uses Ribbon for its load balancing. Microservices are identified by service name in the properties file. If we run two instances of one microservice on different ports, both are identified by Eureka, and with Ribbon (inside Zuul) requests are distributed between them in a balanced way.
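Outside Zuul you get the same Ribbon behaviour with a @LoadBalanced RestTemplate; "customer-service" below is a hypothetical Eureka service name:

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {
    // @LoadBalanced makes Ribbon resolve the host part of the URL as a
    // service name and pick one instance, round-robin by default, e.g.:
    // restTemplate.getForObject("http://customer-service/customer/42", String.class)
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```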
AWS ALB, NGINX, AWS API Gateway, etc.: there are alternatives for all of the things mentioned above. AWS has its own load balancer, service discovery, API gateway, and so on. And not only AWS: all cloud platforms, like Azure, have these. It depends which one you use.
To add a general question as well: how do these microservices communicate with each other? Actual REST APIs can be called using RestTemplate or a Feign client, or message queues like RabbitMQ can be used.
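For example, a hypothetical Feign client (it requires @EnableFeignClients on the application class; the service id and route are assumptions):

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Feign generates the HTTP plumbing; "customer-service" is looked up in Eureka.
@FeignClient(name = "customer-service")
public interface CustomerClient {
    @GetMapping("/customer/{id}")
    String getCustomer(@PathVariable("id") long id);
}
```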
Eureka can be used in conjunction with NGINX, which makes for a very powerful combination.
I am using it in an AWS EC2 environment. Previously, instead of NGINX, I was using Spring Cloud Gateway, and before that Zuul. Depending on the load, Spring Cloud Gateway was running on AWS t3.medium or t3.large instances. After moving to NGINX I am using a t3.micro instance (8 times less memory). I am almost sure I could pull off the same trick with a t3.nano instance (16 times less memory), but I wanted to be sure there would be no surprises.
Below are the high-level steps you have to take to plug NGINX into the Eureka ecosystem. You can find more details in the article NGINX With Eureka Instead of Spring Cloud Gateway or Zuul.
Create a service that reads the configuration of all applications from Eureka and 'translates' it into NGINX configuration (see the sketch after this list).
Create a cron job entry that periodically reads the configuration from the above service and triggers an NGINX hot reload.
NGINX consumes the configuration produced by the service and the cron job, and works as the API gateway.
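A rough sketch of the first step using Spring Cloud's DiscoveryClient; the endpoint path and the rendering are assumptions, not the article's actual code:

```java
import java.util.stream.Collectors;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Renders one NGINX upstream block per Eureka service; the cron job can fetch
// this, write it to a conf.d file, and run `nginx -s reload`.
@RestController
public class NginxConfigController {
    private final DiscoveryClient discoveryClient;

    public NginxConfigController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    @GetMapping(value = "/nginx/upstreams", produces = "text/plain")
    public String upstreams() {
        return discoveryClient.getServices().stream()
                .map(service -> "upstream " + service + " {\n"
                        + discoveryClient.getInstances(service).stream()
                                .map(i -> "    server " + i.getHost() + ":" + i.getPort() + ";")
                                .collect(Collectors.joining("\n"))
                        + "\n}\n")
                .collect(Collectors.joining("\n"));
    }
}
```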

Kubernetes front-end deployment timing out when requesting the API deployment

Let me start this by saying I am fairly new to k8s. I'm using kops on aws.
I currently have 3 deployments on a cluster.
Front end: an nginx image serving an Angular web app. One pod. External service.
socket.io server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
API that is requested by both the socket.io server and the web application. Internal service. (Should it be external?)
The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files without having to change them each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate the deployment.)
If I understood correctly, your frontend web application depends on the API server and sends requests to it. In that case, your API service should be available from outside the cluster, which means it should be exposed as the NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.

How to implement service as app in DEA?

I am trying to create a clustered cache service for Cloud Foundry. I understand that I need to implement the Service Broker API. However, I want this service to be clustered and to run in the Cloud Foundry environment. As you know, container-to-container (TCP) connections are not supported yet, and I don't want to host my backend in another environment.
Basically my question is almost same as this one: http://grokbase.com/t/cloudfoundry.org/vcap-dev/142mvn6y2f/distributed-caches-how-to-make-it-work-multicast
And I am trying to achieve the solution he advised:
B) is to create a CF Service by implementing the Service Broker API, as some of the examples show at the bottom of this doc page [1]. Services have no inherent network restrictions, so you could have a CF Caching Service that uses multicast in the cluster; you would then have local cache clients on your apps that could connect to this cluster using outbound protocols like TCP.
First of all, where does this service live? In the DEA? Will the backend implementation be in the broker itself? How do I implement the backend so the cluster can scale: do I start the same service broker over again?
Second, and another really important question: how do the other services work if TCP connections are not allowed for apps? For example, how does a MySQL service communicate with the app?
There are a few different ways to solve this; the more robust the solution, the more complicated it gets.
The simplest solution is to have a fixed number of backend cache servers, each with its own distinct route, and let your client applications implement (HTTP) multicast to these routes at the application layer. If you want the backend cache servers to run as CF applications, then for now, all solutions will require something to perform the HTTP multicast logic at the application layer.
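A minimal sketch of that application-layer multicast (the backend routes and the cache path are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// The client writes the same cache entry to every backend route itself,
// since the backends cannot talk to each other over TCP inside CF.
public class CacheMulticaster {
    private static final List<String> BACKEND_ROUTES = List.of(
            "https://cache-1.example.cfapps.io",
            "https://cache-2.example.cfapps.io");

    private final HttpClient http = HttpClient.newHttpClient();

    public void put(String key, String value) {
        for (String route : BACKEND_ROUTES) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(route + "/cache/" + key))
                    .PUT(HttpRequest.BodyPublishers.ofString(value))
                    .build();
            http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```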
The next step would be to introduce an intermediate service broker, so that your client apps can all just bind to the one service to get the list of routes of the backend cache servers. So you would deploy the backends, then deploy your service broker API instances with the knowledge of the backends, and then when client apps bind they will get this information in the user-provided service metadata.
What happens when you want to scale the backends up or down? You can then get more sophisticated, where the backends are basically registering themselves with some sort of central metadata/config/discovery service, and your client apps bind to this service and can periodically query it for live updates of the cache server list.
You could alternatively move the multicast logic into a single (clustered) service, so:
backend caches register with the config/metadata/discovery service
multicaster periodically queries the discovery service for list of cache server routes
client apps make requests to the multicaster service
One difficulty is in implementing the metadata service if you're doing it yourself. If you want it clustered, you need to implement a highly-available-ish, consistent-ish datastore; that is almost the original problem you're solving, except the service handles replicating data to all nodes in the cluster, so you don't have to multicast.
You can look at https://github.com/cloudfoundry-samples/github-service-broker-ruby for an example service broker that runs as a CF application.