Splitting up web services on different dedicated web servers - web-services

I currently have a number of web services on a single server.
In the future we want to move the load off the single server and split it across multiple servers.
I am familiar with the concept of scaling out, but in our case I want different web services on different web servers so that traffic can be routed to the correct web service. That way, web services that do more intensive work can be dedicated to a specific server.
How would I do this?
Would I need to change my client applications so that the correct web service is called on the correct web server?

I think the proper pattern to use here would be to have one server with a dispatcher that just forwards requests to the appropriate back-end services. Then, if you decide to move one of the back-end services to another server, you only need to make a configuration change in the dispatcher.
You could do this programmatically, but a software or hardware load balancer (like an F5) can be configured to do it for you.
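Just to illustrate the dispatcher idea, here is a rough Python sketch of the routing table it would own (the hostnames and URL prefixes are made up; a real setup would express the same table in nginx/Apache/F5 configuration rather than in application code). Moving a service to another machine is then just an edit to this table, and clients never change.

```python
# Minimal sketch of a path-based dispatcher routing table.
# Hostnames and prefixes below are hypothetical examples.
from urllib.parse import urlsplit

ROUTES = {
    "/billing":   "http://billing-host:8080",    # CPU-intensive service
    "/reporting": "http://reporting-host:8080",  # long-running queries
    "/":          "http://general-host:8080",    # everything else
}

def resolve_backend(request_url: str) -> str:
    """Return the back-end URL that should handle request_url."""
    path = urlsplit(request_url).path
    # Longest-prefix match so /billing/invoices beats the catch-all "/".
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix] + path
    return ROUTES["/"] + path
```

The client always calls the dispatcher's public hostname; only `resolve_backend` (or the equivalent proxy configuration) knows which physical server does the work.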

Related

Expose SOAP Service from SAP?

I've created a SOAP Service in ABAP, which works perfectly inside the network.
Now I want it to be callable from outside, and I haven't really found any tutorial.
Most likely a SAP Web Dispatcher or a reverse proxy is required, but how do I use them?
Or is there an easier way to make the endpoint "public" and callable from the "outside"?
Making the endpoint public is not part of the SAP system itself; you need to configure your network to allow incoming requests. Generally that means configuring your firewall: open a port and redirect it to your SAP server's HTTP/HTTPS port. Opening an HTTP/HTTPS port to the outside also creates risk, so you must be sure to limit your web service user's authorizations, change all default passwords, and keep your SAP system up to date with security patches.
For better security, I prefer to use a proxy server like nginx or Apache and serve only your SOAP service through it.
Usually this is done through reverse proxies, to minimize the risk of attacks from the public Internet.
The general schema looks the same everywhere, although there are multiple variations depending on the company.
The oldest and most traditional reverse proxy for SAP systems is the SAP Web Dispatcher, which includes load balancing and HTTP filtering:
https://informatik.rub.de/wp-content/uploads/2021/11/2_sap-secure-configuration.pdf
https://wiki.scn.sap.com/wiki/display/SI/FAQ+Web+Dispatcher
https://blogs.sap.com/2021/05/09/landscape-architecture-sap-web-dispatcher-deployment/
SAP Gateway is a framework for exposing functionality as REST/SOAP web services:
https://blogs.sap.com/2018/04/15/sap-odata-service-get-consume-rest-service/
A tutorial for configuring SAP Web Dispatcher and SAP Gateway together:
https://help.sap.com/saphelp_uiaddon10/helpdata/en/ec/342f1809c94d2b817ba772fe69e43f/content.htm?no_cache=true
Other reverse-proxy options for SAP:
nginx
Apache
...
You are free to choose any reverse proxy on the market depending on your environment.
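Once the reverse proxy is in place, an external client just POSTs a SOAP envelope to the proxy's public hostname. A hedged Python sketch of such a call (the hostname, service path, and SOAPAction value are placeholders; take the real values from your service's WSDL):

```python
# Hedged sketch: building a SOAP call to an endpoint published through
# a reverse proxy. Hostname, path, and action below are placeholders.
import urllib.request

def build_soap_request(proxy_host: str, service_path: str,
                       soap_action: str, body_xml: str) -> urllib.request.Request:
    """Wrap body_xml in a SOAP 1.1 envelope and prepare a POST request."""
    envelope = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soapenv:Body>' + body_xml + '</soapenv:Body>'
        '</soapenv:Envelope>'
    )
    return urllib.request.Request(
        url=f"https://{proxy_host}{service_path}",
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": soap_action,
        },
        method="POST",
    )

req = build_soap_request("proxy.example.com",
                         "/sap/bc/srt/rfc/sap/zmy_service",   # example path
                         "urn:example:MyOperation",           # example action
                         "<zmy:MyOperation xmlns:zmy='urn:example'/>")
# urllib.request.urlopen(req)  # works only after firewall/proxy are configured
```

If this call succeeds from outside the network but the same request fails when sent directly to the SAP host, the proxy and firewall are doing their job.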

In SOA (Service Oriented Architecture) does individual services run as separate server?

Big banks follow Service Oriented Architecture for their functioning.
They may have more than 50 services, so do they run each service on a separate server, or do they group services?
Can anyone explain this in detail ?
According to SOA, each service should be able to operate independently and hence could be hosted on a separate server.
But all these servers should communicate with each other internally, so that the system as a whole is aware of the services offered by each server and the outside world can hit a single endpoint when requesting a service. Internally, a routing module identifies the server that offers the particular service and forwards the request to it.
It is also possible for more than one server to serve the same request if the expected load is high.
Also, the term server could mean a runtime, something like a JVM if the service is Java-based, or it could mean a physical machine.
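To make the routing module concrete, here is a toy Python sketch of a registry mapping service names to the servers that host them (all names and hosts are invented). A high-load service simply registers more servers, and requests are spread round-robin across them:

```python
# Toy service registry: maps service names to one or more hosts and
# picks the next host round-robin. Names below are hypothetical.
import itertools

class ServiceRegistry:
    def __init__(self):
        self._servers = {}  # service name -> list of hosts
        self._cursors = {}  # service name -> round-robin iterator

    def register(self, service: str, host: str) -> None:
        self._servers.setdefault(service, []).append(host)
        # Rebuild the cursor so new hosts join the rotation
        # (this resets the rotation, which is fine for a sketch).
        self._cursors[service] = itertools.cycle(self._servers[service])

    def route(self, service: str) -> str:
        """Pick the next server that offers `service`."""
        return next(self._cursors[service])

registry = ServiceRegistry()
registry.register("payments", "pay-1.internal")
registry.register("payments", "pay-2.internal")   # high load: two servers
registry.register("statements", "stmt-1.internal")
```

The outside world only ever talks to the single endpoint that owns this registry; which machine answers is an internal detail.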
According to Wikipedia:
Every computer can run any number of services, and each service is
built in a way that ensures that the service can exchange information
with any other service in the network without human interaction and
without the need to make changes to the underlying program itself.
Normally, services of a similar nature, or services communicating with the same code base, DB server, or other application server, are grouped together, while a high-volume or long-running service could be served separately to speed up the system.
The whole purpose of Service Oriented Architecture (SOA) is to give each module (exposed as a service) the freedom to be deployed, implemented, and expanded with the least effect on other modules. So these services could all be hosted on a single server under different ports, or on different servers.
Usually at big banks there is a team owning each service, and each service is deployed on a different server. In fact, each service may be deployed on many servers for scalability and fault tolerance.
Usually the services are hosted by an Enterprise Service Bus, a component which publishes the services to all information systems of the organization and also to the external B2B customers (via B2B gateway).
The services hosted on ESB may utilize services provided by backend systems. These services of backend system are considered as private and are only consumed via ESB. This approach eliminates the spaghetti mess which comes if everybody integrates with anybody.
Most of the ESB systems I have come across were highly available solutions with an HA database, a cluster of application servers, and a load balancer, all together creating a platform to ensure the stability and performance of the services.
The number of services in an enterprise can be very large, I have been involved in projects with hundreds of services, largest corporations can run thousands of services.
I recommend checking Wikipedia for more about ESBs.

How to implement service as app in DEA?

I am trying to create a clustered cache service for Cloud Foundry. I understand that I need to implement the Service Broker API. However, I want this service to be clustered, and in the Cloud Foundry environment. As you know, container-to-container (TCP) connections are not supported yet, and I don't want to host my backend in another environment.
Basically my question is almost same as this one: http://grokbase.com/t/cloudfoundry.org/vcap-dev/142mvn6y2f/distributed-caches-how-to-make-it-work-multicast
And I am trying to achieve the solution he advised:
B) is to create a CF Service by implementing the Service Broker API as
some of the examples show at the bottom of this doc page [1] .
services have no inherent network restrictions. so you could have a CF
Caching Service that uses multicast in the cluster, then you would
have local cache clients on your apps that could connect to this
cluster using outbound protocols like TCP.
First of all, where does this service live? In the DEA? Will the backend implementation be in the broker itself? How can I implement the backend so the cluster scales: by starting the same service broker again?
Second, and another really important question: how do the other services work if TCP connections are not allowed for apps? For example, how does a MySQL service communicate with the app?
There are a few different ways to solve this, the more robust the solution, the more complicated.
The simplest solution is to have a fixed number of backend cache servers, each with their own distinct route, and let your client applications implement (HTTP) multicast to these routes at the application layer. If you want the backend cache servers to run as CF applications, then for now, all solutions will require something to perform the HTTP multicast logic at the application layer.
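As a rough illustration of that application-layer "multicast", here is a Python sketch where the client fans one cache update out to every backend route itself. The routes are placeholders, and the actual HTTP send function is injected so any client library could be used:

```python
# Sketch of application-layer fan-out ("HTTP multicast") to a fixed
# list of backend cache routes. Route URLs below are placeholders.
from concurrent.futures import ThreadPoolExecutor

CACHE_ROUTES = [
    "https://cache-0.example.cf",
    "https://cache-1.example.cf",
    "https://cache-2.example.cf",
]

def multicast(payload: dict, send, routes=CACHE_ROUTES) -> list:
    """Send payload to every route concurrently; return per-route results.

    `send(route, payload)` is injected so the HTTP client (urllib,
    requests, ...) stays out of the fan-out logic.
    """
    with ThreadPoolExecutor(max_workers=len(routes)) as pool:
        return list(pool.map(lambda r: send(r, payload), routes))
```

With a fixed route list like this, scaling the backends means redeploying the clients, which is exactly the limitation the next paragraphs address.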
The next step would be to introduce an intermediate service broker, so that your client apps can all just bind to the one service to get the list of routes of the backend cache servers. So you would deploy the backends, then deploy your service broker API instances with the knowledge of the backends, and then when client apps bind they will get this information in the user-provided service metadata.
What happens when you want to scale the backends up or down? You can then get more sophisticated, where the backends are basically registering themselves with some sort of central metadata/config/discovery service, and your client apps bind to this service and can periodically query it for live updates of the cache server list.
You could alternatively move the multicast logic into a single (clustered) service, so:
backend caches register with the config/metadata/discovery service
multicaster periodically queries the discovery service for list of cache server routes
client apps make requests to the multicaster service
One difficulty is in implementing the metadata service if you're doing it yourself. If you want it clustered, you need to implement a highly-available-ish consistent-ish datastore, it's almost the original problem you're solving except the service handles replicating data to all nodes in the cluster, so you don't have to multicast.
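A rough sketch of the central discovery piece, assuming a single-node in-memory store (so none of the clustering difficulty mentioned above): backends register and then re-register periodically as a heartbeat, and clients query for the list of routes whose heartbeat is still fresh. The TTL and route names are arbitrary choices.

```python
# Toy discovery service: register() doubles as a heartbeat, and
# live_routes() drops entries whose heartbeat is older than the TTL.
# The clock is injectable to keep the sketch testable.
import time

class Discovery:
    def __init__(self, ttl_seconds: float = 30.0, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        self._routes = {}  # route -> time of last heartbeat

    def register(self, route: str) -> None:
        self._routes[route] = self._clock()

    def live_routes(self) -> list:
        """Routes whose heartbeat is newer than the TTL."""
        now = self._clock()
        return sorted(r for r, t in self._routes.items()
                      if now - t <= self._ttl)
```

Client apps would poll `live_routes()` on a timer and feed the result into their fan-out logic, so scaling the cache backends up or down needs no client redeploys.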
You can look at https://github.com/cloudfoundry-samples/github-service-broker-ruby for an example service broker that runs as a CF application.

Clustering webservices over VPN

We have a number of web services exposed over VPN to our partners for their consumption. I was wondering what would be the best way to make those web services highly available and scalable for their usage. One option could be an Apache instance sitting in front of our web services, acting as a reverse proxy. But that would introduce a single point of failure too. Can we use a physical load balancer? I was not able to find any useful resources for planning out this activity. Any thoughts/ideas?
I have not worked with a physical load balancer, but Apache is a valid solution in most scenarios.
All of our clients (with critical back-end systems) use Apache as a load balancer without problems.
Most application servers also provide their own Apache integration, like mod_jk for WebLogic or mod_cluster for JBoss.
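One way to soften the single-point-of-failure concern without a hardware balancer is to run two proxy instances and have partner clients fail over between them. A hedged Python sketch (the endpoint names are placeholders, and the HTTP call is injected so any client library fits):

```python
# Sketch of client-side failover across redundant reverse proxies.
# Proxy endpoints below are hypothetical placeholders.
PROXIES = ["https://proxy-a.partner-vpn", "https://proxy-b.partner-vpn"]

def call_with_failover(path: str, call, proxies=PROXIES):
    """Try each proxy in order; return the first successful response.

    `call(url)` is the injected HTTP client function; it should raise
    on connection errors so the loop can move to the next proxy.
    """
    last_error = None
    for base in proxies:
        try:
            return call(base + path)
        except Exception as exc:  # real code would catch the client's specific errors
            last_error = exc
    raise RuntimeError(f"all proxies failed: {last_error}")
```

DNS round-robin or a keepalived-style floating IP between the two Apache instances achieves the same effect without touching the clients; this sketch just shows the idea.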

Accessing Windows Network Share from Web Service Securely

We have developed a RESTful Web Service which requires access to a Network share in order to read and write files. This is a public facing Web Service (running over SSL) which requires staff to log on using an assigned user name and password.
This web service will be running in a DMZ. It doesn't seem "right" to access a network share from a DMZ. I would venture a guess that the "secure" way to do this would be to provide another service inside the domain which only talks to our web service. That way, if anyone wanted to exploit it, they would have to find a way to do it via the web service, not through known system APIs.
Is my solution "correct"? Is there a better way?
Notes:
the Web Service does not run under IIS.
the Web Service currently runs under an account with access to the Network Share and access to a SQL database.
the Web Service is intended only for designated staff, not the public.
I'm a developer, not an IT professional.
What about some kind of VPN to use the internal resources? There are some solid solutions for this, and opening network shares to the Internet seems too big a risk to take.
That aside, if an attacker breaks into your DMZ host through those web services, he can break into your internal server using the same API, unless you can afford to create two completely different solutions.
When accessing the file servers from the DMZ directly, you would limit these connections using a firewall, so even after breaking into your DMZ host the attacker cannot do "everything" but only read (write?) from those servers.
I would suggest #2