Using AWS Java SDK 2.0 WebIdentityTokenFileCredentialsProvider gives SdkClientException

I have an application that already works with Kinesis. The application uses AWS Session Credentials, but we are switching to using either AWS Session Credentials or a Web Identity Token (software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider), depending on the deployment environment.
When I add the code to use WebIdentityTokenFileCredentialsProvider, I get the stack trace below. I can't provide the code, but rest assured I am explicitly setting an HTTP client for Kinesis. The stack trace, however, shows that a default HTTP client is being configured by the credentials provider deep within the AWS SDK code: WebIdentityTokenFileCredentialsProvider builds an STS client internally and gives me no way to tell it that I don't want a default HTTP client picked for it.
I know one option is to create my own implementation of WebIdentityTokenFileCredentialsProvider, but I'd rather not do that.
Question: What else can I do to work around this?
Caused by: software.amazon.awssdk.core.exception.SdkClientException: Multiple HTTP implementations were found on the classpath. To avoid non-deterministic loading implementations, please explicitly provide an HTTP client via the client builders, set the software.amazon.awssdk.http.service.impl system property with the FQCN of the HTTP service to use as the default, or remove all but one HTTP implementation from the classpath
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:102)
at software.amazon.awssdk.core.internal.http.loader.ClasspathSdkHttpServiceProvider.loadService(ClasspathSdkHttpServiceProvider.java:62)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:1002)
at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:129)
at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:527)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:513)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:647)
at software.amazon.awssdk.core.internal.http.loader.SdkHttpServiceProviderChain.loadService(SdkHttpServiceProviderChain.java:44)
at software.amazon.awssdk.core.internal.http.loader.CachingSdkHttpServiceProvider.loadService(CachingSdkHttpServiceProvider.java:46)
at software.amazon.awssdk.core.internal.http.loader.DefaultSdkHttpClientBuilder.buildWithDefaults(DefaultSdkHttpClientBuilder.java:40)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.lambda$resolveSyncHttpClient$7(SdkDefaultClientBuilder.java:343)
at java.base/java.util.Optional.orElseGet(Optional.java:364)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.resolveSyncHttpClient(SdkDefaultClientBuilder.java:343)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.finalizeSyncConfiguration(SdkDefaultClientBuilder.java:282)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.syncClientConfiguration(SdkDefaultClientBuilder.java:178)
at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:27)
at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:22)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.build(SdkDefaultClientBuilder.java:145)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:71)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:55)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory.create(StsWebIdentityCredentialsProviderFactory.java:47)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:86)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:46)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider$BuilderImpl.build(WebIdentityTokenFileCredentialsProvider.java:200)
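One workaround, short of re-implementing the provider, is the one the exception message itself suggests: pin the default HTTP implementation through the software.amazon.awssdk.http.service.impl system property, which the provider's internal STS client will then pick up. Below is a minimal sketch assuming the url-connection-client module is on the classpath; the FQCN is illustrative (software.amazon.awssdk.http.apache.ApacheSdkHttpService would be the equivalent for the Apache client) and is not taken from the question.

import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider;

public class WebIdentityWorkaroundSketch {
    public static void main(String[] args) {
        // Pin the default HTTP implementation before the credentials provider is
        // built, so its internal STS client doesn't have to choose between the
        // multiple implementations found on the classpath.
        System.setProperty(
                "software.amazon.awssdk.http.service.impl",
                "software.amazon.awssdk.http.urlconnection.UrlConnectionSdkHttpService");

        // The provider now resolves its internal STS client deterministically.
        AwsCredentialsProvider provider = WebIdentityTokenFileCredentialsProvider.create();

        // ... pass `provider` to the Kinesis client builder as before.
    }
}

The same value can be supplied as a JVM flag (-Dsoftware.amazon.awssdk.http.service.impl=...) if setting it in code is too late for your startup order. Alternatively, the error also disappears if all but one HTTP implementation is excluded from the classpath, or if you switch to the sts module's StsAssumeRoleWithWebIdentityCredentialsProvider, which accepts an explicitly built StsClient.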

Related

Using Windows Authentication with cpprestsdk?

Using WinHTTP right now, and looking to switch over to cpprestsdk. I'm looking through the documentation and I don't see anything about NTLM/Negotiate/Kerberos support. Am I missing something? I find it hard to believe that MS wouldn't have supported it, but I don't see any sample code on how you would use it.
The reason we need NTLM/Negotiate/Kerberos support is that we are running our client via RemoteApp, and we want our users to only have to log in once with their domain credentials when starting the app, and not be prompted to enter a password a second time.
It seems that Windows authentication is readily built into Casablanca (when used on a Windows machine). Take a look at src/http/client/http_client_winhttp.cpp. There you'll find a function "ChooseAuthScheme". To my understanding this will select the "most secure" authentication scheme that a server supplies. If the server, for example, claims to support both BASIC and NEGOTIATE, it will prefer and select the latter as the more secure scheme. Thus Windows authentication should be very easy to use: just don't set any credentials (username/password) and try to connect to a server that supports Windows authentication and also announces this in the HTTP "WWW-Authenticate" header (otherwise Casablanca will of course not attempt to use Windows authentication).
BUT
I'm also trying to use Windows authentication in Casablanca and I'm currently facing two problems:
Windows authentication sort of works as explained above in my scenario. Bewilderingly, however, sometimes it does not. It goes as far as this: I can connect to a server from machine A with user Foo logged in, yet I cannot connect from machine B with user Bar logged in at the very same time, with all machines in the very same network segment and no router or proxy in between. From a Fiddler log I can see that in the failure case Casablanca attempts a connect without any authentication and then receives an HTTP 403 unauthorized from the server (which is to be expected and perfectly fine), but afterwards it fails to resend the request with NEGOTIATE in the headers; it simply aborts. In the successful case, on the contrary, the request is resent with the credentials being a base64-encoded blob (that in fact has binary content and presumably only MS knows what it means). I'm currently unclear about what triggers this, why it happens, and how to resolve it. It could well be a bug in Casablanca.
I have a special scenario with a server that has a mixed operation mode where users should be able to connect via either BASIC or Windows authentication. Let's not reason about the rationale of this scenario and best practices; I'm dealing with a TM1 database by IBM and it's just what they implemented. The users admissible for basic and Windows authentication don't necessarily overlap: one group must authenticate via Windows integrated authentication (say, domain users) and the other must use basic authentication (say, external users). So far I have found no way (without patching Casablanca) to pin the SDK to one mode. If the server announces BASIC and NEGOTIATE, it will always select NEGOTIATE as the more secure mode, making basic authentication inaccessible and effectively locking out the BASIC group. So if you have a similar scenario, this could equally be a problem for you: ChooseAuthScheme() tests several authentication methods, NEGOTIATE, NTLM, PASSPORT, DIGEST and finally BASIC, in this sequence, and will stubbornly select the first one that is supported on both client and server, discarding all other options.
Casablanca (cpprestsdk) fully supports NTLM authentication. If the server rejects a request with status code 401/403 and a WWW-Authenticate header, the library will handle it internally using the most secure authentication method available. In the case of NTLM you can either specify a login/password pair or use automatic logon (Windows), based on the calling thread's current user token.
However, when I tried using the auto-logon feature, it unexpectedly failed on some workstations (case 1 in Don Pedro's answer).
The Windows version of cpprestsdk uses WinHTTP internally. When you try to authenticate automatically against the remote server, the automatic logon policy takes effect.
The automatic logon (auto-logon) policy determines when it is acceptable for WinHTTP to include the default credentials in a request. The default credentials are either the current thread token or the session token depending on whether WinHTTP is used in synchronous or asynchronous mode. The thread token is used in synchronous mode, and the session token is used in asynchronous mode. These default credentials are often the username and password used to log on to Microsoft Windows.
The default security level is WINHTTP_AUTOLOGON_SECURITY_LEVEL_MEDIUM, which allows auto-logon only for intranet servers. The rules governing intranet/internet server classification are defined in the Windows Internet Options dialog and are somewhat murky (at least in our case).
To ensure correct auto-logon I lowered the security level to WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW using the request's native handle options:
#include <windows.h>
#include <winhttp.h>               // WinHttpSetOption, WINHTTP_OPTION_AUTOLOGON_POLICY
#include <cpprest/http_client.h>   // web::http::client::http_client_config

// m_wsUser / m_wsPass are members of the surrounding class in the original code.
web::http::client::http_client_config make_config()
{
    web::http::client::http_client_config config;
    config.set_proxy(web::web_proxy::use_auto_discovery);

    // Use explicit credentials when they were supplied; otherwise rely on auto-logon.
    if (!m_wsUser.empty()) {
        web::credentials cred(m_wsUser, m_wsPass);
        config.set_credentials(cred);
    }

    // Lower the WinHTTP auto-logon policy so default credentials are also sent
    // to servers that are not classified as intranet.
    config.set_nativehandle_options([](web::http::client::native_handle handle) {
        DWORD dwOpt = WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW;
        WinHttpSetOption(handle, WINHTTP_OPTION_AUTOLOGON_POLICY, &dwOpt, sizeof(dwOpt));
    });

    return config;
}
In my case this approach was acceptable because the server and clients are always inside the organization's network boundary. Otherwise this solution is insecure and should not be used.

WSO2 PEP Balana Framework executing in WebSphere

We plan to add a Policy Enforcement Point (PEP) into the WAS post-login and transaction code handled by the WebSphere 8.5 "full" version. Our preliminary tests threw unusual error messages, which pointed to an issue with loading the AXIS web service classes and their associated resource definitions. The error showed up at the SSL protocol setup and pointed to a missing key- and trust-store, or a wrong location.
What resolved the error was changing the Java class loader defaults in the web-based administrative console, replacing the default PARENT-FIRST class loader behaviour with PARENT-LAST, which gives preference to the web service classes delivered directly by the application. We also moved the .jar libraries belonging to the Balana framework into the standard WEB-INF/lib directory. With these updates in place, the application started to execute entitlement requests against the WSO2 IS server, interacting with the XACML PDP framework and sending and receiving XACML requests.

Consuming REST service from PEGA 7 with HTTP Header parameter

I am not a PEGA developer, but this question is for any PEGA developer/admin. It is about an issue I noticed recently while trying to integrate my application (via a REST service) with PEGA 7.
I created a REST service from my application and hosted it with OAuth 2.0 authentication. The PEGA application has to consume my service.
To test the connectivity from PEGA to my application, I created an OAuth token myself and shared it with the PEGA developers, asking them to call my service directly and skip the authorization calls.
Using any REST testing tool such as Chrome's REST Console, Apigee, etc., I was able to test my REST service by just passing the HTTP header param as [param name: Authorization & param value: OAuth ].
But PEGA had an issue directly supplying the HTTP header parameter when testing my service from the PEGA PRPC application.
My questions for PEGA developers/admins are:
Is it difficult in PEGA to add a header param to HTTP calls?
On request, the PEGA screen was shared with me while a developer attempted to test my service from PEGA. During this I noticed that PEGA did not show any trace logs capturing the exact HTTP request that was generated. Is it true that we cannot see the HTTP request (header/body) that was generated from the REST connector tool?
Adding a header parameter is relatively simple. To get information from a REST API in PEGA you define a Connect-REST rule. Sadly, I don't have enough reputation to post images in my answer, but I uploaded a shot of the headers area to imgur, which you can see here: http://imgur.com/vWBm6dD. Make sure you tell your PEGA developers to choose "Constant" as Map From and to put the token in quotes in the "Map From Key" field, as I did in the image.
Unfortunately, it is not possible to log the complete outgoing packet. If you set the logging level to DEBUG for the activity Rule-Connect-REST.pyInvokeRESTConnector, it will log a lot more information during the connection process, including the complete outgoing URL, but not the headers. For your PEGA developers: to change the logging level of this activity, go to the Main Menu (click on DesignerStudio) -> System -> Tools -> Logs -> Logging Level Settings. There set the logger name to "Rule_Obj_Activity.pyInvokeRESTConnector.Rule_Connect_REST.Action" and the level to DEBUG.
If that's not enough to solve the problem, your PEGA developers do have the option of adding their own logging. Connector rules in PEGA are invoked rather than assembled; the code that creates the packet and makes the call to the remote service is in step 5 of that activity, pyInvokeRESTConnector. That activity can be privately checked out like any other, so you could add your own custom logging to make sure everything is being set up correctly there. However, I would strongly advise them against overriding that activity in an application ruleset. Private checkouts are temporary, so they are fine, but an override is permanent and will also override all future updates if they decide to upgrade to another version of PRPC.
You could use the tool Fiddler to see what exactly goes out of Pega to invoke your service.
For OAuth authentication, make sure the Pega Authentication Profile is set to OAuth and the token is extracted properly.
Fiddler will help you see what's going on.

WSO2 API Key Manager

I am configuring our API Manager but running into trouble authenticating via OAuth; it seems to be an issue with the API Key Manager. I haven't dug into it yet, but does this come with the API Manager (as I have assumed), or is it a separate installation?
I had the same issue when using the WSO2 API Manager on an Amazon-hosted machine; it turned out that Thrift was not working correctly because of a problem with multicasting and broadcasting.
What I did to get it working was to switch from ThriftClient to WSClient. If you have a huge number of requests coming in, then Thrift is the solution recommended by WSO2, but in any "normal" case you will not notice any difference between Thrift and WS.
Here is how you switch:
Shut down the API Manager
Open up <api manager install dir>\repository\conf\api-manager.xml
Find <KeyValidatorClientType>ThriftClient</KeyValidatorClientType>
Change it to
<KeyValidatorClientType>WSClient</KeyValidatorClientType>
Start the API Manager
You may get some warnings while starting up, but try it before you jump to the conclusion that it doesn't work.
Hope it helps!
You can use the API Manager product in a distributed setup as key manager, gateway, store, and publisher, but all functionality comes in a single distribution.
Go through the documentation for further guidance.
I was facing the same issue. Everything started when I created my own JKS in order to use SSL without a self-signed certificate. I successfully created the JKS and updated it in the carbon config file. When I started the server everything seemed OK, but when I used SoapUI to test an API call, I got this (in the logs of the API Manager):
APIAuthenticationHandler API authentication failure due to Unclassified Authentication Failure
I started digging into the problem by enabling DEBUG level in the log4j.properties file, then tried the test again with SoapUI and got:
APISecurityException: Could not connect to <my api ip address> on port 10397
Then I read OneMuppet's comment and checked that file, and I found that the Thrift config has a host option, so I uncommented it:
<KeyValidatorClientType>ThriftClient</KeyValidatorClientType>
<ThriftClientPort>10397</ThriftClientPort>
<ThriftClientConnectionTimeOut>10000</ThriftClientConnectionTimeOut>
<ThriftServerPort>10397</ThriftServerPort>
<ThriftServerHost>localhost</ThriftServerHost>   <!-- this is the line to uncomment -->
<EnableThriftServer>true</EnableThriftServer>
I saved, restarted the server, and everything started working correctly.
I got the same issue after my installation; when I tried to invoke the API service it threw the error below:
900900 Unclassified Authentication Failure Error while accessing backend services for API key validation
After some random checks I saw that the axis2.xml file in /repository/conf/axis2 was referring to different IPs. I changed these IPs to my local IP and restarted. The issue is resolved now.
I was facing the same issue when I was trying to set up API Manager as an API Gateway on a different machine, as per the steps given here:
https://docs.wso2.com/display/AM250/Publish+through+Multiple+API+Gateways
Once the setup was done and I tried to use this gateway URL, I was getting the response below:
{"fault":{"code":900900,"message":"Unclassified Authentication Failure","description":"Error while accessing backend services for API key validation"}}
After changing the KeyValidatorClientType value from ThriftClient to WSClient in <api manager install dir>\repository\conf\api-manager.xml, it started working fine and I was able to get the expected response.
If you changed the admin password, then you also have to update the repository/conf/api-manager.xml file with the new password. The two places I have changed (so far) are:
<AuthManager>
and
<APIKeyManager>
but there are other admin usernames in that file. No doubt, I'll get to them....
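For orientation, those blocks in api-manager.xml look roughly like the sketch below; the URLs and usernames are illustrative defaults rather than values taken from this answer, and element names can differ between API Manager versions. Update the <Password> value (and the username, if you changed that as well) in each block.

<AuthManager>
    <ServerURL>https://localhost:9443/services/</ServerURL>
    <Username>admin</Username>
    <Password>new-admin-password</Password>
</AuthManager>

<APIKeyManager>
    <ServerURL>https://localhost:9443/services/</ServerURL>
    <Username>admin</Username>
    <Password>new-admin-password</Password>
</APIKeyManager>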

Unauthorized HTTP request with Anonymous authentication of SAP PI service

I have a WSDL file from our client company, which I need to use to call a web service. Their system is SAP (SAP PI). My application is a C# .NET 3.5 client developed in VS 2008. I added a Service Reference in Visual Studio using their provided WSDL file. This created a reference class for me to use to call their service and set up several bindings in the app.config file for me.
I did not change anything in the app.config file, but I did write code to call their web service. However, when I call their web service, I receive the following exception:
The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Basic realm="SAP NetWeaver Application Server ..."'.
(I slightly modified the string in the 'Basic realm' section so as not to give it out.)
Did the app.config not get built correctly from the WSDL? Am I supposed to modify the app.config file somehow?
Things I've tried:
changed authenticationScheme in app.config from Anonymous to Basic
(as well as all the other authentication types)
changed realm string in app.config to match the realm in the exception message
set username/pw fields in the ClientCredentials.Username object in my code
Any pointers or help would be appreciated.
Edit: After some more investigation, I found that Visual Studio has several warnings about the extension element Policy and Policy assertions:
Custom tool warning: The optional WSDL extension element 'Policy' from namespace 'http://schemas.xmlsoap.org/ws/2004/09/policy' was not handled.
Custom tool warning: The following Policy Assertions were not Imported:
XPath://wsdl:definitions[@targetNamespace='urn:sap-com:document:sap:rfc:functions']/wsdl:binding[@name='Binding_FieldValidation']
Assertions: ...
I wasn't able to find out whether this is related to my current issue with the authentication scheme. It does seem to be related, but I haven't been able to find any solution for getting these policy warnings resolved either. It seems WCF doesn't handle the statements in the WSDL very well.
Most SAP services don't support anonymous access.
So pass some form of authentication data with the call:
user and password / X.509 ticket...
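As a rough illustration of what passing authentication data looks like on the WCF side (a sketch only, assuming Visual Studio generated a basicHttpBinding; the binding name is borrowed from the policy warning above and your generated app.config may differ), enable HTTP Basic on the transport and then set the username/password on ClientCredentials.UserName in code, as the question author already tried:

<basicHttpBinding>
  <binding name="Binding_FieldValidation">
    <!-- Send credentials in the HTTP Authorization header; use mode="Transport" if the endpoint is HTTPS. -->
    <security mode="TransportCredentialOnly">
      <transport clientCredentialType="Basic" />
    </security>
  </binding>
</basicHttpBinding>

If Visual Studio generated a customBinding instead, the equivalent knob is the authenticationScheme="Basic" attribute on its httpTransport/httpsTransport element.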
If you are already sending auth data with the call, then try this: ask the SAP guy to regenerate the WSDL with no SAP assertions, no policy, and SOAP 1.1. You can also try to edit the WSDL by hand to remove the extra guff...
As a starting point, I'd verify that you can call the service successfully with the provided username and password. Use something like SoapUI to test that everything works correctly - just create a new project, import the WSDL provided by SAP PI, set the username and password and execute the call. You'll probably get some form of exception with an empty payload, but at least that'll verify that the username and password are correct.
Once you've verified that's working, check that your application is calling the service correctly and that the http basic authentication headers are being sent. You can confirm this by using a network monitoring tool and checking that the http request is being generated correctly. Something like netcat for Windows can do it - just make it listen to a port on your local machine and then specify localhost and the port as your SOAP endpoint.
Once you've verified both of those are correct, your call should succeed.
The Basic authentication header must be missing, or there is something wrong with the credentials.
SAP PI always defaults to Basic authentication if a service is published via its SOAP adapter. I would investigate whether WCF really does send out that header (e.g. point your client endpoint to TCP Gateway and let TCP Gateway point to the SAP PI endpoint from the WSDL).
About the warnings: AFAIK the WSDL generated by SAP PI will always contain these Policy tags; you can't really omit them. What you can do is simply throw them out, as they are not really validated.