So we are migrating to Azure from a traditional IIS-hosted environment.
Our current solution has three web services hosted in IIS under a single site; all of them are accessible via http://api.example.com/ServiceName.svc
For backward compatibility we need the same addressing from Azure cloud services. My understanding is that if the services are deployed separately they get different DNS names, and if they are bundled together they must operate on different ports; either way, we are snookered.
Is it possible to get around this at all?
One solution I considered was to create a very light proxy/router service that accepts requests for all three services and routes each one to the appropriate internal service based on the full URL, not just the host.
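To illustrate the idea, here is a minimal sketch of such a router (written in Java purely for illustration; the internal addresses and route table are assumptions, and a production version would also forward verbs, headers, and request bodies):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class ServiceRouter {
    // Hypothetical route table: request path -> internal service address.
    static final Map<String, String> ROUTES = Map.of(
            "/ServiceA.svc", "http://10.0.0.1:8081",
            "/ServiceB.svc", "http://10.0.0.2:8081",
            "/ServiceC.svc", "http://10.0.0.3:8081");

    public static void main(String[] args) throws IOException {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(80), 0);
        server.createContext("/", exchange -> forward(client, exchange));
        server.start();
    }

    static void forward(HttpClient client, HttpExchange exchange) throws IOException {
        String path = exchange.getRequestURI().getPath();
        // Route on the full request path, not just the host.
        String target = ROUTES.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst().orElse(null);
        if (target == null) {
            exchange.sendResponseHeaders(404, -1);
            exchange.close();
            return;
        }
        try {
            // Relay the request and echo the response back (GET only in this sketch).
            HttpRequest request = HttpRequest.newBuilder(URI.create(target + path)).build();
            HttpResponse<byte[]> response =
                    client.send(request, HttpResponse.BodyHandlers.ofByteArray());
            byte[] body = response.body();
            exchange.sendResponseHeaders(response.statusCode(), body.length == 0 ? -1 : body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            exchange.sendResponseHeaders(502, -1);
            exchange.close();
        }
    }
}
```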
I'm very new to Azure, so perhaps there is something under the bonnet that can help with all of this.
Thanks
If your services are all under a single project, you should have no issue deploying them to a single cloud service (and thus a single host name and port).
Hello and thanks in advance.
First I want to provide some context to make answering my question easier.
We are using Google Cloud.
We have reached the point where our need to easily deploy updates to various parts of the application has run into the limitations of our monolithic architecture.
Our app is not huge, but it already consists of two physical services: the backend (the scope being updated), and a caching server which caches data and provides Mongo-like search over data from Google Datastore.
We have two options here:
"plugins" - like nanoservices running within same process which are developed in a way that these nanoservices do not know they are on the same process, all they know is a set of "plugins shell API" injected at activation of a nanoservice code. This shell gives the nanoservice access to a database, logging, configuration, routes registration, control events like refresh pages map and some metadata like website root url and root of static content deployed as supply stuff for a service version. Like https://static.server.com/deployments/foo/v2
2) Standard microservices on Kubernetes, where the same API mentioned above is exposed to each service via a "shell client" package deployed as part of the container image.
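To make option 1 concrete, here is a minimal sketch of what such a shell might look like; every name and signature below is an assumption for illustration, not an existing API:

```java
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

/** Hypothetical "plugins shell" handed to every nanoservice at activation. */
interface PluginShell {
    /** Access to the shared database (kept abstract in this sketch). */
    Object database();
    void log(String message);
    String config(String key);
    /** Route registration: maps a path to a handler from request params to a response body. */
    void registerRoute(String path, Function<Map<String, String>, String> handler);
    /** Control events, e.g. a "refreshPagesMap" broadcast. */
    void onEvent(String event, Consumer<String> listener);
    /** Metadata: the website root URL. */
    String websiteRootUrl();
    /** Metadata: the versioned static-content root, e.g. https://static.server.com/deployments/foo/v2 */
    String staticContentRootUrl();
}

/** A nanoservice never learns whether it shares a process; it only sees the shell. */
interface NanoService {
    void activate(PluginShell shell);
}
```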
In short, this is the common "infrastructure vs. library" dilemma often mentioned in the articles about microservices I have read.
For the library approach I already have some idea of how to implement all of this, including hot module replacement without stopping the server, but the more I read about Kubernetes the stronger I feel that I am reinventing the Kubernetes (or similar) wheel.
Here is how I imagine it:
1) There is a router service, the single service exposed outside the cluster; it is attached as a backend to the load balancer we already have.
It will handle authentication/authorization of outside requests and pick the page to be rendered or the API endpoint to be invoked. When a page is requested, the related template is loaded and the data for pre-rendering is fetched by calling the relevant endpoint exposed by the module's service. When a public API endpoint is picked, the matching service endpoint is called.
There are a few services, including:
a caching service (the service which is currently deployed on a separate server group, described above);
an updates service, which processes the switching of module service versions and provides an API to do so via some admin UI;
module services (one per module): each module exposes endpoints for providing pre-rendered page data, an endpoint listing the page routes to be registered, the API endpoint implementations, and an endpoint listing the exposed API routes that can be invoked through the router service;
a router service, which processes external requests and dispatches them to the other services as appropriate, using a cached routes map that is updated whenever an internal service (e.g. the updates service) broadcasts a pages-map refresh event.
What is stopping me from starting to use Kubernetes right away is a lack of knowledge about how to implement the following scenarios:
1) Only one microservice, the "routing service", is exposed outside the cluster.
2) Reusing the built-in service discovery to communicate with services within the cluster, such as the caching server.
3) The cluster's router service is attached as a backend to the cloud load balancer we already have.
In my opinion, you should have a look at the NGINX Ingress Controller to build your routing scheme; you can find more information here and here.
EDIT: In addition, you can try some other ingress controllers; among them, Istio and Traefik are definitely worth your attention as alternative solutions to the NGINX Ingress Controller.
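As a minimal sketch of how an ingress setup can cover all three scenarios (service names and ports below are assumptions): the Ingress exposes only the router, the ingress controller's own LoadBalancer Service is what attaches to the cloud load balancer, and internal services stay ClusterIP and are discovered via built-in DNS names.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: router-ingress
spec:
  ingressClassName: nginx          # served by the NGINX Ingress Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: router-service   # the only service reachable from outside
                port:
                  number: 8080
---
# Internal services stay ClusterIP (the default), unreachable from outside the
# cluster; the router finds them via built-in DNS, e.g. http://caching-service:8080
apiVersion: v1
kind: Service
metadata:
  name: caching-service
spec:
  selector:
    app: caching
  ports:
    - port: 8080
      targetPort: 8080
```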
I have an application deployed to Google Cloud App Engine (flex environment).
The application consists of two parts: a FrontEnd (Angular) and a BackEnd (Spring Boot).
Each of these parts is deployed to a different service under the same App Engine app.
Is there any way to apply a firewall rule to the BackEnd service to deny all requests except the ones coming from the FrontEnd service?
Note: I have many services under the same App Engine app, so I need to apply the rule to only one service so that the other services are not affected.
There is currently no way to do that: the App Engine firewall affects all of your services, dispatch.yaml will not prevent clients from reaching your project via [project_name].appspot.com, and adding network settings in app.yaml only has effect within the context of that network.
One workaround would be to set up a separate project and allow access there only from another Google Cloud project. Alternatively, you can have the backend instances check authentication themselves using service accounts.
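For the service-account route, here is a sketch of what the check on the BackEnd might look like, assuming the FrontEnd sends a Google-signed ID token in the Authorization header and that the google-auth-library-java TokenVerifier API is available (the audience and account names are placeholders; verify the exact API against the current docs):

```java
import com.google.api.client.json.webtoken.JsonWebSignature;
import com.google.auth.oauth2.TokenVerifier;

public class CallerCheck {
    // Both values are placeholders for this sketch.
    private static final String EXPECTED_AUDIENCE = "https://backend-dot-my-project.appspot.com";
    private static final String FRONTEND_SA = "frontend@my-project.iam.gserviceaccount.com";

    /** Returns true only for tokens minted for the FrontEnd's service account. */
    public static boolean isAllowed(String bearerToken) {
        try {
            TokenVerifier verifier = TokenVerifier.newBuilder()
                    .setAudience(EXPECTED_AUDIENCE)
                    .build();
            JsonWebSignature token = verifier.verify(bearerToken);
            return FRONTEND_SA.equals(token.getPayload().get("email"));
        } catch (TokenVerifier.VerificationException e) {
            return false; // bad signature, wrong audience, or expired token
        }
    }
}
```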
We are running a Spring Boot-based environment with about 15 microservices and a Zuul edge gateway registered with Eureka. Currently, I have set up all microservices to call other microservices through the Zuul gateway (e.g. if serviceA needs to call serviceB, the URL configuration property would be serviceB.baseUrl=http://zuul.mydomain.com:7001, where zuul.mydomain.com is our development server on AWS with all the other microservices running behind it). Zuul in turn proxies to the microservices via Eureka registry lookups.
One benefit of doing it this way is that a developer working locally only needs to run his own service; all dependencies on other services are reachable through the Zuul gateway on AWS (and in our ecosystem there are a lot of such cross-service dependencies).
Now, I would really love to leverage the full potential of Eureka/Ribbon and make calls directly to a peer microservice via its service name and a @LoadBalanced RestTemplate, but I find that this would impose quite a lot on the developer, who would have to recreate an entire ecosystem on his machine. At a minimum, he would have to run Eureka, his own service, and any other services his service depends on. This makes the barrier to entry for development unnecessarily high.
I did consider making the developer's local instance register with our Eureka service on AWS, but the problem is that all services on AWS register using the EC2 instance's private IP, which is basically unreachable from the developer's machine. If I forced the services to register using their public IPs, I would have to use up more of our Elastic IP allocation for each service, or the IP would change every time the EC2 instance gets rebooted.
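For reference, forcing the registered address would look like this (standard Eureka instance properties; the IP is illustrative):

```properties
# Standard Spring Cloud Netflix Eureka instance settings; the IP is illustrative.
eureka.instance.prefer-ip-address=true
eureka.instance.ip-address=203.0.113.10   # would have to be a stable public/Elastic IP
```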
I could run a local Eureka + microservices environment on the local network, but that would mean creating one such environment for every office we operate out of, which just means more overhead. On top of that, developer A might end up calling developer B's half-done-not-quite-there-yet version of a dependency service, which confuses the heck out of everyone when a problem occurs (services deployed to our AWS environment at least go through a code review first).
If anybody has figured out a way to simplify a developer's setup while still being able to leverage the peer-to-peer service invocation possibilities of Feign / @LoadBalanced RestTemplate clients, I would love some pointers in the right direction.
I have confirmed that I can accomplish what I want (local development against Ribbon-enabled RestTemplate clients without having to run Eureka) by doing the following (a client-side sketch follows the list):
Force Ribbon not to use Eureka with the property ribbon.eureka.enabled=false
Manually provide Ribbon with the servers to use, e.g. servicename.ribbon.listOfServers=test.service.com:8080
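For completeness, the client side is the standard Spring Cloud wiring; the logical name "servicename" below matches the prefix used in the listOfServers property:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class ClientApp {

    @Bean
    @LoadBalanced  // resolve logical service names through Ribbon instead of DNS
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ClientApp.class, args);
    }
}

// Elsewhere, inject the RestTemplate and call the service by its logical name:
//   restTemplate.getForObject("http://servicename/some/endpoint", String.class);
// With ribbon.eureka.enabled=false, Ribbon resolves "servicename" from listOfServers.
```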
Big banks follow a Service Oriented Architecture (SOA) for their operations.
They may have more than 50 services, so do they run the individual services on separate servers, or do they group services together?
Can anyone explain this in detail?
In SOA, each service should be able to serve requests independently, and hence each could be hosted on a separate server.
But all these servers should communicate with each other internally, so that the system as a whole is aware of the services offered by each server and the outside world can hit a single endpoint when requesting a service. Internally, a routing module identifies the server that offers the particular service and serves the request.
It is also possible for more than one server to serve the same request if the expected load is high.
Also, the term "server" could mean a runtime (something like a JVM if the service is Java-based) or it could mean a machine.
According to Wikipedia: "Every computer can run any number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself."
Normally, services of a similar nature, or services sharing the same code base, DB server, or application server, are grouped together, while a high-volume or long-running service could be hosted separately to speed up the system.
The whole purpose of Service Oriented Architecture (SOA) is to give each module (exposed as a service) the freedom to be deployed, implemented, and expanded while least affecting the other modules. So these services could all be hosted on a single server under different ports, or each could be on a different server.
Usually at big banks there is a team owning each service. Each service is deployed on a different server; in fact, each service may be deployed on many servers for scalability and fault tolerance.
Usually the services are hosted on an Enterprise Service Bus (ESB), a component which publishes the services to all information systems of the organization and also to external B2B customers (via a B2B gateway).
The services hosted on the ESB may utilize services provided by backend systems. These backend services are considered private and are only consumed via the ESB. This approach eliminates the spaghetti mess that arises when everybody integrates with everybody.
Most ESB systems I have come across were highly available solutions with an HA database, a cluster of application servers, and a load balancer, all together creating a platform that ensures the stability and performance of the services.
The number of services in an enterprise can be very large; I have been involved in projects with hundreds of services, and the largest corporations can run thousands of services.
I recommend checking Wikipedia for more about ESBs.
I currently have a number of web services on a single server.
In the future we want to move the load off the single server and split it across other servers.
I am familiar with the concept of scaling out, but in our case I want different web services on different web servers so that traffic can be routed to the correct web service; that way, web services doing more intensive work can be dedicated to a specific server.
How would I do this?
Would I need to change my client applications so that the correct web service is called on the correct web server?
I think the proper pattern to use here would be to have one server with a dispatcher that simply forwards requests to the appropriate back-end services. Then, if you decide to move one of the back-end services to another server, you only need to make a configuration change in the dispatcher.
I am sure you could also do this programmatically, but software or hardware load balancers (like an F5) can be configured to do the same.
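As a sketch of that configuration-only change (all names here are hypothetical), the dispatcher's route table could be as simple as:

```properties
# Hypothetical dispatcher route table: path prefix -> back-end server.
routes./billing=http://server-a:8081
# Moving the reports service to a new machine is a one-line change here;
# clients keep calling the dispatcher and never notice.
routes./reports=http://server-b:8082
```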