Consuming a webservice with jsessionid in URL - web-services

I'm working on an SAP project where I have to call a non-SAP service with a jsessionid in the binding URL. I already generated a proxy class from the WSDL and defined a logical port with my URL. In my case it should be dynamic, like {host}/service/foo/binding;jsessionid={xxx}, but it is static: {host}/service/foo/binding
How can I achieve that session handling?
EDIT: The problem is that this is not only for authentication, it is also for load balancing. The jsessionid MUST be submitted via URL rewriting. Any ideas?

You should be able to configure this with the soamanager transaction:
Go to the service configuration screen and select your consumer proxy
Edit the existing, or create a new logical port
Go to the transport settings tab and change the URL access path
Once saved, you can find the logical port as a destination in transaction SM59. It's one of the generated ones in the external HTTP connections tree.
Providing a value for the parameter will probably require a modification of the SAP software, though. The system uses the cl_http_client=>create_by_destination method to obtain a client object for the HTTP call, so you may be able to implement some custom code there.
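For illustration, a rough sketch of what such custom code could look like, using the standard ABAP HTTP client API (the destination name is a placeholder, and where exactly this can be hooked into the generated proxy call is the open question):

DATA lo_client     TYPE REF TO if_http_client.
DATA lv_jsessionid TYPE string.              " assume this was captured from an earlier response

* Obtain the client for the destination that was generated in SM59
cl_http_client=>create_by_destination(
  EXPORTING
    destination = 'Z_FOO_DEST'               " placeholder destination name
  IMPORTING
    client      = lo_client ).

* Rewrite the request URI so the session id travels in the URL
cl_http_utility=>set_request_uri(
  request = lo_client->request
  uri     = |/service/foo/binding;jsessionid={ lv_jsessionid }| ).

lo_client->send( ).
lo_client->receive( ).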

Related

Envoy access logs format validation

We are using Envoy access logs
(https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage). Does Envoy validate the fields that are passed to the access logs, e.g. the field format?
I am asking for basic security reasons: I want to verify that if I use, for example, %REQ(:METHOD)% I will get a real HTTP method like GET or POST and not something like FOO, and that [%START_TIME%] is in a time format and will not contain something else...
I think it's related to this envoy code
https://github.com/envoyproxy/envoy/blob/24bfe51fc0953f47ba7547f02442254b6744bed6/source/common/access_log/access_log_impl.cc#L54
I am asking because we send the data from the access logs to another system, and we want to verify, from a security perspective, that the data is as defined in the access log configuration and that no one can change it.
For example, that an IP is really in IP format, a path is in path format, and a URL is in URL format.
I'm not sure I understand the question. Envoy doesn't have to validate anything, as it is the one generating those logs. Envoy is an HTTP proxy that receives the request and performs routing/rewriting/auth/drop/... actions based on its configuration (configured by VirtualService / DestinationRule / EnvoyFilter if we're talking about Istio). After the action it generates the log entry and fills the fields with details about the original request and the actions taken.
Also, there is no such thing as a 'real' HTTP method. The HTTP method is just a string and can hold any value. Envoy is just the proxy that sits between the client and the application and passes the requests through (unless you explicitly configure it to, e.g., drop some method).
How a method is treated depends on the application that receives it. GET/POST/HEAD are commonly associated with standard HTTP and static pages; PUT/DELETE/PATCH are used in REST APIs. But nothing prevents you from developing an application that accepts a 'FOOBAR' method and runs some code for it.
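To illustrate, a minimal file access logger fragment (this goes under the HTTP connection manager; the logger and output path here are just examples, not taken from your setup):

access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      text_format_source:
        inline_string: "[%START_TIME%] %REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %RESPONSE_CODE%\n"

If a client sends a request line such as FOOBAR /index.html HTTP/1.1, %REQ(:METHOD)% will simply log FOOBAR; any validation of those values has to happen in the system that consumes the logs.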

Configuring WSO2 IS behind a reverse proxy at some context

I am trying to set up WSO2 Identity Server behind a reverse proxy for SSL offloading. For example, let's say WSO2 IS is available at https://<some-ip>:9443/, and I am trying to put it behind a reverse proxy with an address such as https://<domain name>/is/. Note the context path /is and SSL port 443. I thought this would be trivial, but sadly I am unable to find any conclusive documentation for achieving it.
My applications use OIDC to connect to WSO2 IS, with Azure Application Gateway as the reverse proxy. Typically all API calls work well, but none of the UI (or flows involving redirections) works, because of the context path. I can fix redirects by URL rewriting at the reverse proxy, but that still doesn't solve the problem: for example, the login page will appear, but the XHR call from it will go to /logincontext instead of /is/logincontext. Where can I set up the proxy context path in WSO2 IS? I already tried setting it in the .toml file (the equivalent of setting it in carbon.xml), but it seems to affect only the Management Portal.
The WSO2 IS documentation talks about setting it up behind nginx, but that documentation does not use any path context. I could find reverse proxy documentation for other WSO2 products, such as WSO2 API Manager, but it only involves updating carbon.xml, and that doesn't work for WSO2 IS. I am not a Java person and hence find it difficult to figure out the web app organization of WSO2.
Any help, link to documentation, or guide for setting this up with a proxy context would be useful.
I know this answer comes a little late, but I recently had a similar issue and here is how I made it work; maybe it will be helpful for someone. I was using WSO2 IS 5.11.0.
Note:
I checked similar questions on Stack Overflow and found a few, but none was enough by itself for my case.
Maybe the solution I came up with is not the best or the most correct, but it is the only one I could make work.
Here is how I did it, assuming the context path is "is":
Open Carbon Management Console and go to Identity Providers -> Resident. Then, go to Inbound Authentication Configuration -> OAuth2/OpenID Connect Configuration. Here, change the hostname under Identity Provider Entity ID to https://domain_name:443/is/<remaining path>.
Make sure that the port number is either present in both places or absent in both places, i.e. here and in the client application. If there is a mismatch between the two, for some reason it won't work (or at least it didn't for me).
Open the file deployment.toml and modify it as follows:
under the [server] section, add your proxy context at the end of the base_path URL, e.g. base_path = "https://$ref{server.hostname}:${carbon.management.port}/is";
of course, also add proxy_context_path = "is" (actually, this last line should be enough, but for some reason in my case it wasn't, so I had to modify the base path too);
under [transport.https.properties] add proxyPort="443".
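Put together, the relevant fragments of deployment.toml from these steps look roughly like this (the hostname value is a placeholder for your domain):

[server]
hostname = "domain_name"
base_path = "https://$ref{server.hostname}:${carbon.management.port}/is"
proxy_context_path = "is"

[transport.https.properties]
proxyPort = "443"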
For the record, I also turned off compression, by adding:
[transport.http.properties]
compression="off"
[transport.https.properties]
...
compression="off"
and set the token issuer URL equal to the entity id set up in Carbon, with:
[oauth]
use_entityid_as_issuer_in_oidc_discovery = true
but found out that these last two steps (turning off compression and setting the entity id as issuer) weren't needed.
Disable the CSRF guard by setting org.owasp.csrfguard.Enabled = false
in the file /repository/resources/conf/templates/repository/conf/security/Owasp.CsrfGuard.Carbon.properties.j2.
This step was necessary for me to avoid the 403 error after logging in to the Carbon Console (turning off compression didn't work).
Lastly, if you use nginx as a reverse proxy (as I did), add these two lines in the location block used for WSO2:
proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
These are needed (or at least were for me) because some URLs are not under the context path. In particular, the last one allows you to open the Carbon Console at https://domain_name/is/carbon/.
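For completeness, a simplified sketch of the nginx location these lines live in (the backend address is a placeholder; the trailing slash on proxy_pass strips the /is/ prefix before forwarding, since the IS web apps themselves are still deployed at the root context):

location /is/ {
    proxy_pass https://127.0.0.1:9443/;   # placeholder for the actual WSO2 IS host
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
    proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
}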
References:
wso2 api manager carbon page gives 403 Forbidden
WSO2 Identity Server login returns a 403
WSO2 Identity Server port configuration
To understand the template-based configuration model adopted from version 5.9.0 onwards, see:
https://apim.docs.wso2.com/en/latest/reference/understanding-the-new-configuration-model/
https://mcvidanagama.medium.com/understand-wso2-api-managers-new-configuration-model-6425a2710faa
Here are some useful configuration mappings from the old xml to the new toml based model:
https://github.com/ayshsandu/samples/tree/master/config-mapping

HTTP 407 Proxy Authentication Required while accessing Amazon S3

I have tried everything, but I can't seem to fix this issue, which happens for only one client behind a corporate proxy/firewall. Our Silverlight application connects to Amazon S3 for downloading/uploading documents. For one client, and one client only, it returns a 407 error, and after that the application fails to save anything.
Inner Exception:
System.ServiceModel.ProtocolException: [UnexpectedHttpResponseCode]
Arguments: 407,Proxy Authentication Required
We had something similar at a different client, but that was more of a CORS issue. To resolve it, I used CloudFront to fake a sub-domain that then accesses the S3 bucket, and it solved the issue. I was hoping it would fix it for this client as well, but it didn't.
I have tried adding this code to web.config, as suggested by a lot of answers:
<system.net>
<defaultProxy useDefaultCredentials="true" >
</defaultProxy>
</system.net>
I have read articles about passing proxy headers with basic authentication using a username and password, but I am not sure how this would help us. The proxy server is used by the client, and any authentication it requires is outside our domain.
**Additional Information**
The Silverlight code references two services. One is our WCF service that retrieves all the data for the application. The other is the Amazon S3 service, which uses the Amazon SOAP API; its endpoint is at http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl
If I go into our app and only use parts of the system that don't make any calls to the Amazon S3 API, the application works fine. As soon as I go to a part of the system that makes a call to S3, the problem starts. Funnily enough, the call to S3 goes fine and I can retrieve the document, but then any calls to our WCF service return 407.
Any ideas?
**Update 2**
Based on comments from Elliot Nelson, I checked the stack we were using for making HTTP requests in our application. It turns out we are using the client HTTP stack for both http and https requests by default. Here is the code we have in the App constructor (App.xaml.cs):
public App()
{
Startup += Application_Startup;
UnhandledException += Application_UnhandledException;
InitializeComponent();
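// Route all http/https requests through the client HTTP stack instead of the default browser stack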
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
}
Now I need to understand the differences between ClientHttp and BrowserHttp and when to use each, as well as the potential impacts/issues of switching to BrowserHttp.
**Update 3**
Is there a way to request that browsers run your in-browser Silverlight application in trusted mode, and would that help bypass this issue?
(Answer #2)
So, most likely (for corporate environments like this network), almost nothing can be done without whatever custom proxy settings are set in IE, usually pushed by corporate policy. To take advantage of these proxy settings, you want to use WebRequestCreator.BrowserHttp, which automatically uses the browser's default settings when making requests.
There's a table of the differences between these two clients available in the Microsoft docs. I'm guessing you were using something (maybe setting custom headers or reading the raw response body) that wasn't supported in BrowserHttp.
For security reasons, you can't "ask" the browser what its proxy settings are and use them, so this is a tricky situation. You can specify Browser vs Client handling by domain, or even for a specific request (the same page above describes how); you may be able in this case to get away with just using ClientHttp for your service calls and BrowserHttp for your S3 calls, and avoid the problem altogether!
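For instance, a sketch of that split in the App constructor might look like this (the first URL is a placeholder for your own WCF service host; the S3 prefix matches the SOAP endpoint you mentioned):
// Keep the client stack for your own service calls...
WebRequest.RegisterPrefix("https://yourservice.example.com/", WebRequestCreator.ClientHttp);  // placeholder host
// ...and let S3 requests use the browser stack, which picks up the browser's proxy settings.
WebRequest.RegisterPrefix("http://s3.amazonaws.com/", WebRequestCreator.BrowserHttp);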
For next steps, I'd try that approach; if it doesn't work, I'd try switching wholesale to BrowserHttp just to see if it bypasses the proxy issue (there's almost no chance the application will actually work, since you're probably using ClientHttp-only options).
Long term, you may want to consider making changes to your services so they are usable by a BrowserHttp-only application (this would require you to be pretty basic in your requests/responses, but using only BrowserHttp would be a guarantee you'd work in pretty much any corp network).
Running in trusted mode is probably a group policy thing which would require their AD admins to approve / whitelist your app.
I think the underlying issue you are facing is that the proxy requires NTLM authentication and for whatever reason the browser declines to provide your app with that context.
One way to prove that it's an NTLM auth issue is to test with curl: get it to make a request through the proxy, and then it should be a bit easier to code against. For example, the following curl command will get you through 99% of Windows corporate proxies (assuming the proxy is at proxy-host.corp:3128):
C:\> curl.exe -v --proxy proxy-host:3128 --proxy-user : --proxy-ntlm https://www.google.com
NOTE The --proxy-user : tells curl to use the current user session to perform the NTLM challenge.
So if you can get the client to run that, you can at least confirm that NTLM works; then it's just a matter of getting the app to perform the NTLM challenge using the default credentials (which may or may not be provided by the browser session).
Since you described this as a Silverlight application, I'm going to assume you can't use classic browser-proxy troubleshooting like "move the browser to a public network" or "try a different browser" to isolate the problem.
You should try to isolate the proxy server, and have the customer use the required proxy-auth.
The application is making the request, but it might be intercepted by a transparent proxy, or the result might be coming from what you consider to be a web server.
In the early days, the 401 error was pretty strictly associated with web-auth, and 407 was for proxy-auth.
Architecturally, the separation is a convenience; a single server can exhibit web server, proxy, and reverse-proxy behaviors.
What happens is that your customer's environment makes a web connection to the destination but receives an HTTP 407 status from some host, probably on their own network, or sometimes at the provider. Almost certainly the request is being received, not forwarded. The HTTP client your application lives in needs to provide the credentials that host requires. Corporate environments are complex enough that your customer will often say this is the first time they have heard of this (some proxy-auth is also dynamic or destination-specific).
Also, in some corporate environments, the operator can allow temporary or permanent whitelisting from the proxy-auth service. You should see if they can do this, even temporarily, to confirm there aren't going to be other problems.
In the end, it sounds like your application might not robustly support proxy-auth, or the proxy-auth type they use in their environment.

How to tell AWS application load balancer to not forward the path pattern?

I have configured my AWS application load balancer to have the following rules:
/images/* forward to server A (https://servera.com)
/videos/* forward to server B (https://serverb.com)
And this is correctly forwarding to the respective servers. However, I don't want the load balancer to forward the request as https://servera.com/images and https://serverb.com/videos. I just want the respective servers to be hit as https://servera.com and https://serverb.com, without the path patterns in the request.
I don't want to modify my request parameters or change my server side code for this. Is there a way I can tell the application load balancer to not forward the path patterns?
Is there a way I can tell the application load balancer to not forward the path patterns?
No, there isn't. It's using the pattern to match the request, but it doesn't modify the request.
I don't want to modify my request parameters or change my server side code for this.
You'll have to change something.
You shouldn't have to change your actual code. If you really need this behavior, you should be able to accomplish it using the web server configuration: an internal path rewrite before the request is handed off to the application should be a relatively trivial reconfiguration in Nginx, Apache, HAProxy, or whatever is actually listening on the instances.
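For example, with Nginx listening on the instance, a rewrite along these lines (the upstream address is a placeholder for wherever the application actually listens) strips the prefix internally:

location /images/ {
    # Internal rewrite only; the client-facing URL is unchanged
    rewrite ^/images/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:8080;   # placeholder application upstream
}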
Also, it seems to me that you are making things difficult for yourself by wanting the server to respond to a path different from what is requested by the browser. Such a configuration will tend to make it more difficult to ensure correct test results and correct handling of relative and absolute paths, since the applications will have an inaccurate internal representation of what the browser is requesting or will need to request.

SOAP request going to localhost

I'm testing a SOAP web service using Soap UI.
The WSDL location is on an external server, so I load it from that URL and create a new project from it.
However, when I try to create a request, the endpoint points to localhost.
I have to manually insert the right endpoint URL for the request to work.
What could be behind this?
If you open the WSDL file in a browser or a text editor, you will see that the endpoint defined there is that same URL, i.e. localhost, rather than the actual one. That is why you are experiencing this.
This usually happens because developers use localhost while developing, and there is no point in putting their hostname in the WSDL unless it is a public WSDL. If the QA team is using it, they will set their hostname in the endpoint, and similarly for the other environments. So you really do not have to worry about it, I believe.
To set the right endpoint in one place and apply it to all requests (instead of changing it for each and every request), do the following:
Go to the service interface, right-click, and choose to show the interface viewer.
Click on the Service Endpoints tab.
Click on the + button.
Add the actual endpoint that you want.
Click on the Assign button, select All requests and Test Requests from the dropdown, and confirm with OK.
Repeat the same steps if you have multiple interfaces in your project.
Save the project, and you are done.
You should be able to see the desired endpoint for all the existing test requests, and even for new ones you create later.