I've created a SOAP service in ABAP, which works perfectly inside the network.
Now I want it to be called from outside, and I haven't really found any tutorial.
Most likely an SAP Web Dispatcher or a reverse proxy is required, but how do I use them?
Or is there an easier way to make the endpoint "public" and callable from the "outside"?
Making it public is not part of the SAP system; you need to configure your network to allow incoming requests. Generally that means configuring your firewall: you open a port on the firewall and redirect it to your SAP server's HTTP/HTTPS port. Opening an HTTP/HTTPS port to the outside also creates risk, so make sure you limit your web service user's authorizations, change all default passwords, and keep the SAP system up to date with security patches.
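As an illustration only, here is what that port forwarding could look like on a Linux gateway using iptables; the public port and the internal SAP host/port are placeholders, not values from the question:

    # Redirect TCP 44300 arriving at the gateway to the SAP server's HTTPS port.
    iptables -t nat -A PREROUTING -p tcp --dport 44300 \
      -j DNAT --to-destination 10.0.0.10:44300
    # Also allow the forwarded traffic through the filter table.
    iptables -A FORWARD -p tcp -d 10.0.0.10 --dport 44300 -j ACCEPT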
For better security, I prefer to put a proxy server such as nginx or Apache in front and serve only your SOAP service through it.
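A minimal nginx sketch of that idea, assuming the SOAP runtime sits under the usual /sap/bc/srt/ ICF path; the hostname, backend address, and certificate paths are placeholders:

    server {
        listen 443 ssl;
        server_name soap.example.com;
        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        # Expose only the SOAP runtime path, nothing else.
        location /sap/bc/srt/ {
            proxy_pass https://sap-app-server.internal:44300;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }

        # Deny everything else.
        location / {
            return 403;
        }
    }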
Usually this is done through reverse proxies, to minimize the risk of attacks from the public Internet.
The general scheme looks the same everywhere, although there are multiple variations depending on the company.
The oldest and most traditional reverse proxy for SAP systems is the SAP Web Dispatcher, which includes load balancing and HTTP filtering (a minimal profile sketch follows the links below):
https://informatik.rub.de/wp-content/uploads/2021/11/2_sap-secure-configuration.pdf
https://wiki.scn.sap.com/wiki/display/SI/FAQ+Web+Dispatcher
https://blogs.sap.com/2021/05/09/landscape-architecture-sap-web-dispatcher-deployment/
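As a rough illustration, a minimal Web Dispatcher profile for this scenario could look like the sketch below; the SID, hosts, ports, and file paths are placeholders, and a real setup also needs the usual SSL configuration:

    # Backend ABAP system, located via its message server
    wdisp/system_0 = SID=ABC, MSHOST=sapms.internal, MSPORT=8100, SRCURL=/

    # HTTPS port the Web Dispatcher listens on for external requests
    icm/server_port_0 = PROT=HTTPS, PORT=44300

    # URL filtering via a permission file
    icm/HTTP/auth_0 = PREFIX=/, PERMFILE=$(DIR_INSTANCE)/sec/permissions.txt

where permissions.txt permits only the SOAP runtime path and denies the rest:

    P /sap/bc/srt/*
    D *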
SAP Gateway is a framework for exposing functionality as REST/OData web services:
https://blogs.sap.com/2018/04/15/sap-odata-service-get-consume-rest-service/
A tutorial for configuring SAP Web Dispatcher and SAP Gateway together:
https://help.sap.com/saphelp_uiaddon10/helpdata/en/ec/342f1809c94d2b817ba772fe69e43f/content.htm?no_cache=true
Other options for a reverse proxy in front of SAP include:
nginx
Apache
...
You are free to choose any reverse proxy on the market depending on your environment.
Related
Can anyone tell me what kind of service fits the use case below?
I want to expose a public IP that receives HTTP/HTTPS requests and forwards the traffic to the services I have on-premises.
Looking at Azure, AWS, etc., is there a service that solves my problem?
Regards...
If you are using Azure and you want HTTPS-based requests to be sent to your backend APIs (which can be on-premises or on any cloud), you can check out Azure API Management (APIM).
You can use APIM with or without a VNET.
APIM can be used in External mode if you want to integrate a VNET for data plane operations; this exposes a public IP as well as a Gateway URL that can be used to send HTTPS traffic.
Reference:
https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet?tabs=stv2
https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts#scenarios
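For illustration, creating an APIM instance attached to a VNET in External mode with the Azure CLI might look like the sketch below; the names, region, and SKU are placeholders, and you should check the linked docs for the exact parameters supported by your CLI version:

    az apim create \
      --name my-apim --resource-group my-rg --location eastus \
      --publisher-name Contoso --publisher-email admin@contoso.com \
      --sku-name Developer \
      --virtual-network External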
Additionally, you can check out Application Gateway.
Reference:
https://learn.microsoft.com/en-us/azure/architecture/example-scenario/gateway/firewall-application-gateway
I was hoping someone could explain how I would set up a multi-tiered web application. There is a database tier, an app tier, a web server tier, and a client tier. I'm not exactly sure how to separate the app tier and the web server tier, since the app tier will be in a private subnet. I would have the client send requests directly to the app server, but the private subnet is a requirement, and having the app server separated from the web server is a requirement as well.
The only idea I have had is to serve the content from the web server and have the client send all requests to that same web server on another port, like port 3000. If a request arrives on that port, a Node app using Express forwards it to the app tier, since the web server can talk to the app server.
I did set up a small proof of concept doing this: the web server serves the content, another Express app listens on port 3000, the client sends its requests to port 3000, and that app sends the exact same thing on to the app server.
This is my current setup, with the web servers hosting two servers: one serving the frontend on port 80 and one receiving requests on port 3000. The server listening on port 3000 forwards all requests to the app server's ALB (it is basically a copy of all the routes on the app server, but it just forwards the requests instead of performing an action). Is there a way to avoid this extra hop in the middle, i.e. to get rid of the additional server listening on 3000 without exposing the internal ALB?
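For reference, a minimal sketch of the forwarder described above, using Express with http-proxy-middleware; the app-tier ALB URL is a placeholder:

    // forwarder.ts - listens on 3000 and relays everything to the app tier
    import express from "express";
    import { createProxyMiddleware } from "http-proxy-middleware";

    const app = express();

    // Forward every request unchanged to the internal app-tier ALB.
    app.use(createProxyMiddleware({
      target: "http://internal-app-alb.example.internal", // placeholder
      changeOrigin: true,
    }));

    app.listen(3000, () => console.log("forwarder listening on 3000"));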
To separate your web servers and application servers, you can use a VPC with public and private subnets. In fact, this is such a common scenario that Amazon has already provided us with documentation.
As for a "better way to do this," I assume you mean security. Here are some options:
You can (and should) run host-based firewalls such as iptables on your hosts.
AWS also provides a variety of options.
You can use Security Groups, which are stateful firewalls for your hosts.
You can also use Network Access Control Lists (ACLs), which are stateless firewalls used to control traffic in and out of subnets. (A short CDK sketch of the security-group approach follows this list.)
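Here is a hedged AWS CDK sketch of that security-group layering; the construct names and ports are illustrative, and the rest of the stack (instances, load balancers) is elided:

    import * as cdk from "aws-cdk-lib";
    import * as ec2 from "aws-cdk-lib/aws-ec2";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "TieredAppStack");

    // The default Vpc construct creates public and private subnets per AZ.
    const vpc = new ec2.Vpc(stack, "Vpc");

    const webSg = new ec2.SecurityGroup(stack, "WebSg", { vpc });
    const appSg = new ec2.SecurityGroup(stack, "AppSg", { vpc });

    // The Internet may reach the web tier on 80/443 only.
    webSg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80), "HTTP from anywhere");
    webSg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(443), "HTTPS from anywhere");

    // Only the web tier may reach the app tier; security groups are
    // stateful, so replies flow back automatically.
    appSg.addIngressRule(webSg, ec2.Port.tcp(3000), "app traffic from web tier only");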
AWS would also argue that many shops can improve their security posture by using managed services, so that all of the patching and maintenance is handled by AWS. For example, static content could be hosted on Amazon S3, with dynamic content provided by microservices behind API Gateway. Finally, from a security perspective, AWS provides services like Trusted Advisor, which can help you find and fix common security misconfigurations.
I am trying to create a clustered cache service for Cloud Foundry. I understand that I need to implement the Service Broker API. However, I want this service to be clustered, and in the Cloud Foundry environment. As you know, container-to-container (TCP) connections are not supported yet, and I don't want to host my backend in another environment.
Basically my question is almost the same as this one: http://grokbase.com/t/cloudfoundry.org/vcap-dev/142mvn6y2f/distributed-caches-how-to-make-it-work-multicast
I am trying to achieve the solution he advised:
B) is to create a CF Service by implementing the Service Broker API, as some of the examples show at the bottom of this doc page [1]. Services have no inherent network restrictions, so you could have a CF Caching Service that uses multicast in the cluster; then you would have local cache clients on your apps that could connect to this cluster using outbound protocols like TCP.
First of all, where does this service live? In the DEA? Will the backend implementation be in the broker itself? How can I implement the backend so that the cluster scales: do I just start the same service broker over again?
Second, and another really important question: how do the other services work if TCP connections are not allowed for apps? For example, how does a MySQL service communicate with the app?
There are a few different ways to solve this; the more robust the solution, the more complicated it gets.
The simplest solution is to have a fixed number of backend cache servers, each with their own distinct route, and let your client applications implement (HTTP) multicast to these routes at the application layer. If you want the backend cache servers to run as CF applications, then for now, all solutions will require something to perform the HTTP multicast logic at the application layer.
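A sketch of that application-layer multicast, assuming each backend exposes a simple HTTP PUT endpoint; the routes and the path are made up for illustration (in the broker-based variants below, the route list would come from the bound service instead of being hard-coded):

    // Fan the same cache write out to every backend route.
    const cacheRoutes = [
      "https://cache-0.example.com",
      "https://cache-1.example.com",
    ];

    async function multicastPut(key: string, value: string): Promise<void> {
      await Promise.all(cacheRoutes.map((route) =>
        fetch(`${route}/cache/${encodeURIComponent(key)}`, {
          method: "PUT",
          body: value,
        })
      ));
    }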
The next step would be to introduce an intermediate service broker, so that your client apps can all just bind to the one service to get the list of routes of the backend cache servers. So you would deploy the backends, then deploy your service broker API instances with the knowledge of the backends, and then when client apps bind they will get this information in the user-provided service metadata.
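Before writing a full broker, the same idea can be approximated with a user-provided service; a sketch with placeholder names and routes:

    # Publish the backend route list as bindable metadata...
    cf create-user-provided-service cache-cluster \
      -p '{"routes": ["https://cache-0.example.com", "https://cache-1.example.com"]}'
    # ...and bind it; the routes then appear in the app's VCAP_SERVICES.
    cf bind-service my-client-app cache-cluster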
What happens when you want to scale the backends up or down? You can then get more sophisticated: the backends register themselves with some sort of central metadata/config/discovery service, and your client apps bind to this service and periodically query it for live updates of the cache server list.
You could alternatively move the multicast logic into a single (clustered) service, so:
backend caches register with the config/metadata/discovery service
multicaster periodically queries the discovery service for list of cache server routes
client apps make requests to the multicaster service
One difficulty is in implementing the metadata service if you're doing it yourself. If you want it clustered, you need to implement a highly-available-ish, consistent-ish datastore, which is almost the original problem you're solving, except that the service handles replicating data to all nodes in the cluster so you don't have to multicast.
You can look at https://github.com/cloudfoundry-samples/github-service-broker-ruby for an example service broker that runs as a CF application.
We have developed a RESTful Web Service which requires access to a Network share in order to read and write files. This is a public facing Web Service (running over SSL) which requires staff to log on using an assigned user name and password.
This web service will be running in a DMZ. It doesn't seem "right" to access a network share from a DMZ. I would venture a guess that the "secure" way to do this would be to provide another service inside the domain that talks only to our Web Service. That way, if anyone wanted to exploit it, they would have to find a way to do it via the Web Service, not through known system APIs.
Is my solution "correct"? Is there a better way?
Notes:
the Web Service does not run under IIS.
the Web Service currently runs under an account with access to the Network Share and access to a SQL database.
the Web Service is intended only for designated staff, not the public.
I'm a developer, not an IT professional.
What about some kind of VPN to use the internal resources? There are some pretty good solutions for this, and opening network shares to the Internet seems too big a risk.
That aside, when an attacker breaks into your DMZ host through those web services, he can break into your internal server using the same API, unless you can afford to build two completely different solutions.
When accessing the file servers from the DMZ directly, you would limit these connections using a firewall, so even after breaking into your DMZ host the attacker cannot do "everything" but can only read (write?) to those servers.
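A hedged sketch of such a rule set on the internal firewall, with placeholder addresses: only the DMZ web service host may reach the file server, and only on the SMB port:

    # Allow SMB from the DMZ web service host to the file server only.
    iptables -A FORWARD -s 192.0.2.10 -d 10.0.0.20 -p tcp --dport 445 -j ACCEPT
    # Drop everything else originating from the DMZ host.
    iptables -A FORWARD -s 192.0.2.10 -j DROP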
I would suggest option #2, the firewalled direct access.
I currently have a number of web services on a single server.
In the future we want to move the load off the single server and split it across other servers.
I am familiar with the concept of scaling out, but in our case I want different web services on different web servers so that traffic can be routed to the correct web service. That way, web services that do much more intensive work can be dedicated to a specific server.
How would I do this?
Would I need to change my client applications so that the correct web service is called on the correct web server?
I think the proper pattern here would be to have one server with a dispatcher that just forwards requests to the appropriate back-end services. Then, if you decide to move one of the back-end services to another server, you only need to make a configuration change in the dispatcher.
I am sure you can do this programmatically, but software or hardware load balancers (like F5) also have the ability to be configured for this.
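As a sketch of the dispatcher idea (hostnames and paths are placeholders), path-based routing in nginx keeps the client-facing URL stable while the backend mapping lives in one config file:

    upstream light_services {
        server svc-light.internal:8080;
    }
    upstream heavy_services {
        # Dedicated box for the more intensive web services.
        server svc-heavy.internal:8080;
    }
    server {
        listen 80;
        # Clients keep calling one host; only this mapping changes
        # when a service moves to another server.
        location /reports/ { proxy_pass http://heavy_services; }
        location /         { proxy_pass http://light_services; }
    }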