Apologies if this is a silly question.
I have a Spring-Boot application packaged as a jar file.
When I deploy it to CF (cf push), the REST endpoint is available based on the CF application context.
Now, if I have CF scale this application over multiple nodes, does CF automatically handle the routing of REST calls to the application instances, or do I need to add a gateway like Netflix Eureka/Zuul to my application?
Thanks,
Greg
Requests to apps running on Cloud Foundry flow through the Cloud Foundry gorouter, which automatically load balances the requests across all your instances. You do not need Eureka or Zuul to get load balancing to your REST endpoint.
Eureka is useful if you have apps deployed to Cloud Foundry that you pushed without a route and are using container-to-container networking and client-side load balancing.
I am trying to deploy microservices built in Spring Boot on AWS, but I don't know which AWS service is suitable for each particular Spring microservice (Cloud Config, Service Discovery, API Gateway, and Vault).
I built an API gateway service with Spring Boot, but when it comes to deployment on AWS I got confused by the AWS API Gateway.
Do we need both of them to work together, or can we just set up the Spring Boot API gateway on an EC2 instance?
And, slightly off topic, do we need a separate EC2 instance for each small service like 'Service Discovery', 'Config Service', etc.?
thanks
An API gateway is just a way of routing to your application, no matter whether it is hosted on a serverless platform or on an EC2 instance.
You can try to deploy your Spring Boot app in the AWS Lambda environment; this way you don't have to think about configuring the server environment. You do have to be aware of the application's cold start in this case. You can google how to deal with that problem.
An API gateway is like a facade in front of your microservices for communication with external clients. There are several ways to use/implement an API gateway depending on requirements such as request routing, API composition (calling multiple services and combining the responses), authentication, caching, etc.
AWS API Gateway is good if you need the request routing feature, but it can't perform API composition. In that case you need to implement your own custom API gateway using technologies such as Spring Cloud Gateway and reactive programming.
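For the request-routing case, a minimal Spring Cloud Gateway configuration could look like the sketch below (assuming the spring-cloud-starter-gateway dependency; the route ids, paths, and service names are made up for illustration):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Forward /customers/** to a hypothetical customer service.
                // "lb://" resolves the host via service discovery when a discovery
                // client is configured; use a plain http:// URI otherwise.
                .route("customers", r -> r.path("/customers/**")
                        .uri("lb://customer-service"))
                .route("orders", r -> r.path("/orders/**")
                        .uri("http://orders.internal:8080"))
                .build();
    }
}
```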
GraphQL is another popular technology for implementing an API gateway.
P.S. Service discovery is a separate concept. In real life you will often use Kubernetes or a service mesh, which handles service registry and discovery internally.
I am trying to containerize my Angular + Java app in a Kubernetes cluster. I have a frontend deployment and a backend deployment in my k8s cluster. My database is in AWS (RDS). But I am confused about what API URL I should put in my frontend code so that it can connect to my backend app in the k8s cluster.
For example:
On my local system I use something like localhost:8080/api/customers in my frontend code, but what should I change it to when deploying to the Kubernetes cluster?
I have a Kubernetes cluster set up with 1 master and 2 worker nodes. I created a deployment of my backend app and exposed it through a ClusterIP service, and then I put this cluster IP and port in my frontend application.
After that I pushed the image to Docker Hub and then created a k8s deployment for it, but it still isn't working.
My main question is: what URL and port should I put as the target URL in my frontend application so that it can hit my Java APIs?
The frontend Angular application runs inside the user's browser. That is outside of the Kubernetes cluster, so you cannot use the Kubernetes service name as the API endpoint.
You need to make the Spring Boot API accessible from outside of Kubernetes, usually via an Ingress or a load balancer, and use that external IP or host name as the API URL in the Angular application.
If your two applications run in the same Kubernetes cluster, you can call your backend service as svcname:port, for example:
http://login:8080/login
This assumes the frontend pods are in the same Kubernetes namespace. If they are in a different namespace, you would call something like this:
http://login.<namespace>.svc.cluster.local:5555/login
Exposing my backend service through a load balancer, and then using that load balancer endpoint in my frontend application, worked for me.
I have deployed a Kubernetes cluster using kops. The current cluster uses an nginx ingress controller, which creates a classic load balancer in AWS. I have some backend applications that talk to the frontend application and some backend services that just talk to each other. The problem is that the only way currently to make the frontend app talk to the backend apps is by creating an ingress for the backend apps, since the frontend sends requests via the domain name and doesn't understand the internal service names. For the backends it is fine, since they can talk internally just by using the service name and their respective port. How can I achieve this without having to create an ingress for the backends? Is it possible to do that using an Application Load Balancer, or do I need an API gateway for that? How do I achieve this architecture? I am adding an architecture diagram to show what I want to achieve. Any help is appreciated.
From your "architecture diagramm" it looks like all your applications are within the cluster. So no need for ingress. You can just use kubernetes services.
Your frontend app should be able to call the endpoints of the backend services otherwise you made something wrong in the configuration of the frontend service.
If you have no chance to change the URL which the frontend app calls for backend services, you can use for example a kubernetes service with CNAME and redirect to your internal services.
You don't need an ingress to connect to the backend from the frontend.
Assuming both backend and frontend pods are running in the same Kubernetes cluster, the frontend service can reach the backend service using the service DNS name:
backend-service.<namespace>.svc.cluster.local
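As a rough illustration of such an in-cluster call from server-side code (not from the browser), assuming a backend Service named backend-service in the default namespace listening on port 8080 (all of these names are hypothetical):

```java
import org.springframework.web.client.RestTemplate;

public class BackendClient {

    // Hypothetical in-cluster DNS name: adjust the service name, namespace and port
    // to match your own Service definition.
    private static final String BASE_URL =
            "http://backend-service.default.svc.cluster.local:8080";

    private final RestTemplate restTemplate = new RestTemplate();

    public String fetchCustomers() {
        // Kubernetes DNS resolves the service name to its ClusterIP; from there
        // kube-proxy balances across the backend pods.
        return restTemplate.getForObject(BASE_URL + "/api/customers", String.class);
    }
}
```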
I have been working with Spring and would now like to learn Spring Boot and microservices. I understand what a microservice is all about and how it works. While going through the docs I came across many things that are used to develop microservices along with Spring Boot, and I am very confused about them.
I have listed the systems and my questions below:
Netflix Eureka - I understand this is a service discovery platform. All services will be registered to the Eureka server and all microservices are Eureka clients. Now my doubt is: without having an API gateway, is there any use for this service registry? This is to understand the actual use of a service registry.
Zuul API gateway - I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct? Will the API gateway interact with Eureka to get the appropriate microservice?
NGINX - I have read that NGINX can also be used as an API gateway. Is that possible? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both? I know NGINX is a web server and its reverse proxying can be powerfully configured.
AWS API Gateway - Can this also be used as an alternative to Zuul?
Ribbon - What is Ribbon used for? I didn't understand!
AWS ALB - This can also be used for load balancing. So do we need Zuul if we have AWS ALB?
Please help
Without having an API gateway, is there any use for this service registry?
Yes. For example, you can use it to locate (IP and port) all your microservices. This comes in handy for DevOps-type work. For example, on one project I worked on, we used Eureka to find all instances of our microservices and ping them for their status (/health, /info).
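As a rough sketch of that kind of tooling (assuming Spring Cloud's DiscoveryClient abstraction backed by Eureka; the service id and the actuator path are illustrative):

```java
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class InstanceHealthChecker {

    private final DiscoveryClient discoveryClient; // backed by Eureka when the Eureka client is on the classpath
    private final RestTemplate restTemplate = new RestTemplate();

    public InstanceHealthChecker(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public void pingAll(String serviceId) {
        // Ask the registry for every instance of the given service and hit its health endpoint.
        for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
            String status = restTemplate.getForObject(instance.getUri() + "/health", String.class);
            System.out.println(instance.getUri() + " -> " + status);
        }
    }
}
```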
I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct?
Yes, but it can do a lot more. Because Zuul is essentially a framework/library that you turn into a microservice, you can code it to implement any kind of routing logic you can come up with. It is very powerful in that sense. For example, let's say you want to change how you route based on the time of day or any other external factor; with Zuul you can do it.
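To make that concrete, a pre filter along the following lines could carry such logic (a hedged sketch assuming Spring Cloud Netflix Zuul; the "reports-batch" service id and the time window are invented for illustration, and the filter would be registered as a @Bean in the Zuul application):

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import java.time.LocalTime;

public class TimeBasedRoutingFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // runs before the routing filters decide where to send the request
    }

    @Override
    public int filterOrder() {
        return 10; // after Spring Cloud's PreDecorationFilter, so we can override its decision
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        // Purely illustrative: route to a different (made-up) Eureka service id at night.
        if (LocalTime.now().isAfter(LocalTime.of(22, 0))) {
            ctx.set("serviceId", "reports-batch");
        }
        return null;
    }
}
```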
Will the API gateway interact with Eureka to get the appropriate microservice?
Yes. You configure Zuul to point to Eureka. It becomes a client of Eureka and even subscribes to Eureka for real-time updates (which instances have joined or left).
I have read that NGINX can also be used as an API gateway? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both?
Nginx is pretty powerful and can do API gateway-type work. But there are some major differences. AFAIK, microservices cannot dynamically register with Nginx (please correct me if I am wrong) as they can with Eureka. Second, while I know Nginx is highly (very highly) configurable, I suspect its configuration abilities do not come close to Zuul's routing capabilities (since you have the whole Java language at your disposal within Zuul to code your routing logic). It could be that there are service discovery solutions that work with Nginx. So Nginx will take care of the routing and such, but service discovery will still require a separate solution.
Can this also be used as an alternative to Zuul?
Yes, AWS API Gateway can be used as a Zuul replacement of sorts. The issue here, just like with Nginx, is service discovery. AWS API Gateway lets you apply logic to your routing, though not as open-ended as Zuul.
What is Ribbon used for?
While you can use the Ribbon library directly, for the most part consider it an internal dependency of Zuul. It helps Zuul do the simple load balancing that it does. Please note that this project is in maintenance mode and is no longer recommended.
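If you do want to use it directly, the usual entry point in Spring Cloud is a @LoadBalanced RestTemplate, roughly as sketched below (the customer-service id is a made-up example; newer Spring Cloud versions back this with Spring Cloud LoadBalancer instead of Ribbon):

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class LoadBalancedClientConfig {

    // @LoadBalanced lets the client-side load balancer resolve service ids used as host names, e.g.:
    //   restTemplate.getForObject("http://customer-service/customers/42", String.class);
    // where "customer-service" is a registered service id, not a real host name.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```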
This can also be used for load balancing. So do we need Zuul if we have AWS ALB?
You can use ALB with ECS (Elastic Container Service) to replace Eureka/Zuul. ECS will take care of the service discovery for you and will map all instances of a particular service to a target group. Your ALB routing table can then route to target groups based on simple routing rules. The routing rules in ALB are very simple, though they are improving over time.
Different systems that can be used to make microservices work, alongside Spring Boot:
Eureka:
Probably the first microservice to be up. Eureka is a service registry, meaning it knows which microservices are running and on which ports. Eureka is deployed as a separate application, and we can use the @EnableEurekaServer annotation along with @SpringBootApplication to make that app a Eureka server. So our Eureka service registry is up and running. From then on, all microservices are registered with this Eureka server by using the @EnableDiscoveryClient annotation along with @SpringBootApplication in all deployed microservices.
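For instance, a minimal Eureka server could look like this (a sketch assuming the spring-cloud-starter-netflix-eureka-server dependency; the class name is arbitrary):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {

    public static void main(String[] args) {
        // Starts the registry; other services point eureka.client.serviceUrl.defaultZone here
        // and annotate their own applications with @EnableDiscoveryClient.
        SpringApplication.run(ServiceRegistryApplication.class, args);
    }
}
```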
Zuul: Zuul is a load balancer, a routing application, and a reverse proxy server as well. That is, where before we used Apache for reverse proxying, for microservices we can now use Zuul. The advantage is that in Zuul we can set configurations programmatically, e.g. if a request matches /customer/*, send it to this particular microservice. Zuul can also act as a load balancer, picking the appropriate microservice instance in a round-robin fashion. So how does Zuul know the details of the microservices? The answer is Eureka: Zuul works together with Eureka to get the microservice details. In fact, Zuul is also a Eureka client, which we mark using @EnableDiscoveryClient; that is how these two apps (Eureka and Zuul) are linked.
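A minimal Zuul edge application might look like the following sketch (assuming spring-cloud-starter-netflix-zuul and a Eureka client on the classpath; route rules such as zuul.routes.customers.path=/customer/** would live in application.properties, with the names made up here):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient // registers the gateway itself with Eureka and lets it look up other services
@EnableZuulProxy       // turns this app into the routing edge service
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```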
Ribbon:
Ribbon is used for load balancing. It is already available inside Zuul, which uses Ribbon for its load balancing. Microservices are identified by service name in the properties file. If we run 2 instances of one microservice on different ports, Eureka will know about both, and together with Ribbon (inside Zuul) requests will be distributed between them in a balanced way.
AWS ALB, NGINX, AWS API Gateway, etc.: There are alternatives for all of the things mentioned above. AWS has its own load balancer, service discovery, API gateway, etc. And not only AWS; all cloud platforms, like Azure, have these. It depends which one you use.
To add a general question as well: how do these microservices communicate with each other? The actual REST APIs can be called using RestTemplate or a Feign client, or message queues like RabbitMQ etc. can be used.
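A declarative Feign client is often the least code; here is a rough sketch (assuming spring-cloud-starter-openfeign and @EnableFeignClients on the main application class; customer-service and the /customers path are invented names):

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// "customer-service" is a made-up Eureka service id; the path is equally illustrative.
@FeignClient(name = "customer-service")
public interface CustomerClient {

    // Feign generates the HTTP call; the target instance is chosen via service discovery
    // and client-side load balancing.
    @GetMapping("/customers/{id}")
    String getCustomer(@PathVariable("id") Long id);
}
```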
Eureka can be used in conjunction with NGINX, which leads to a very powerful combination.
I am using it in an AWS EC2 environment. Previously, instead of NGINX, I was using Spring Cloud Gateway, and before that Zuul. Depending on the load, Spring Cloud Gateway was running on AWS t3.medium or t3.large instances. After moving to NGINX I am using a t3.micro instance (8 times less memory). I am almost sure I could do the trick with a t3.nano instance (16 times less memory) as well, but I wanted to be certain there would be no surprises.
Below are the high-level steps you have to take in order to plug NGINX into the Eureka ecosystem. You can find more details in the article NGINX With Eureka Instead of Spring Cloud Gateway or Zuul.
Create a service which reads the configuration of all applications from Eureka and 'translates' it into NGINX configuration (a rough sketch of this step follows the list).
Create a cron job entry which periodically reads the configuration from the above service and triggers an NGINX hot reload.
NGINX then consumes the configuration produced by the service and the cron job, and acts as the API gateway.
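As an illustration of step 1 only (my own sketch, not the article's code), such a translator could iterate over the Eureka registry via Spring's DiscoveryClient and render NGINX upstream blocks:

```java
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Service;

@Service
public class NginxUpstreamRenderer {

    private final DiscoveryClient discoveryClient;

    public NginxUpstreamRenderer(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Renders one NGINX "upstream" block per registered service; the cron job from step 2
    // would fetch this output, write it into the NGINX config directory and hot reload NGINX.
    public String renderUpstreams() {
        StringBuilder config = new StringBuilder();
        for (String serviceId : discoveryClient.getServices()) {
            config.append("upstream ").append(serviceId).append(" {\n");
            for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                config.append("    server ")
                      .append(instance.getHost()).append(':').append(instance.getPort())
                      .append(";\n");
            }
            config.append("}\n\n");
        }
        return config.toString();
    }
}
```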
Let me start by saying I am fairly new to k8s. I'm using kops on AWS.
I currently have 3 deployments on a cluster.
Frontend: an nginx image serving an Angular web app. One pod. External service.
Socket.io server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
API that is requested by both the socket.io server and the web application. Internal service (should it be external?).
The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files without having to change them each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate the deployment.)
If I understood correctly, your frontend web application depends on the API server and sends requests to it. In that case, your API service should be available from outside of the cluster, which means it should be exposed with the NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.