WS-Federation protocol is deprecated - wso2

I am working with Identity and Access Control and I want to use the WS-Federation protocol to enable browser-based SSO (Single Sign-On). I want to know whether this protocol is deprecated for security reasons or not?

No, it is not deprecated. It is one of the main protocols implemented in WIF (the framework you'd use if you are on the Microsoft platform).

Related

Using Windows Authentication with cpprestsdk?

Using WinHTTP right now, and looking to switch over to cpprestsdk. I'm looking through the documentation, and I don't see anything about NTLM/Negotiate/Kerberos support. Am I missing something? I find it hard to believe that MS wouldn't have supported it, but I don't see any sample code on how you would use it.
The reason we need NTLM/Negotiate/Kerberos support is that we are running our client via RemoteApp, and want our users to only have to log in once with their domain credentials when starting the app, and not be prompted to enter a password a second time.
It seems that Windows authentication is readily built into Casablanca (when used on a Windows machine). Take a look at src/http/client/http_client_winhttp.cpp. There you'll find a function "ChooseAuthScheme". To my understanding this will select the "most secure" authentication scheme that a server supplies. If the server e.g. claims to support both "BASIC" and "NEGOTIATE", it will prefer and select the latter as the more secure scheme. Thus Windows authentication should be very easy to use: just don't set any credentials (username/password) and try to connect to a server that supports Windows authentication and also announces this in the HTTP "WWW-Authenticate" header (otherwise Casablanca will of course not attempt to use Windows authentication).
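For illustration, a minimal sketch of that zero-configuration case (the intranet URL is a placeholder, not something from the answer):

#include <cpprest/http_client.h>
#include <iostream>

// No credentials are set, so the WinHTTP backend may answer a
// "WWW-Authenticate: Negotiate" challenge with the token of the
// logged-on Windows user.
int main()
{
    web::http::client::http_client client(U("http://intranet-server/api/"));
    client.request(web::http::methods::GET, U("/status"))
        .then([](web::http::http_response response)
        {
            std::wcout << L"Status: " << response.status_code() << std::endl;
        })
        .wait();
    return 0;
}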
BUT
I'm also trying to use Windows authentication in Casablanca and I'm currently facing two problems:
Windows authentication sort of works as explained above in my scenario. Bewilderingly, however, sometimes it does not. This goes as far as that I can connect to a server from machine A with user Foo logged in, yet cannot connect from machine B with user Bar logged in at the very same time, with all machines in the very same network segment without any router or proxy in between. From a Fiddler log I can see that in case of a failure Casablanca attempts a connect without any authentication and then receives an HTTP 401 Unauthorized from the server (which is to be expected and perfectly fine), but afterwards fails to resend the request with NEGOTIATE in the headers; it simply aborts. On the contrary, in the successful case there is a resend with the credentials being a base64-encoded blob (that in fact has binary content and presumably only MS knows what it means). I'm currently unclear about what triggers this, why this happens and how to resolve it. It could well be a bug in Casablanca.
I have a special scenario with a server that has a mixed operation mode where users should be able to connect via Basic or Windows authentication. Let's not reason about the rationale of the following scenario and best practices; I'm dealing with a TM1 database by IBM and it's just what they implemented. The users admissible for Basic and Windows authentication don't necessarily overlap: one group must authenticate via Windows integrated authentication (let's say domain users) and the other one must use Basic authentication (let's say external users). So far I found no way (without patching Casablanca) to clamp the SDK to one mode. If the server announces BASIC and NEGOTIATE, it will always select NEGOTIATE as the more secure mode, making Basic authentication inaccessible and effectively locking out the BASIC group. So if you have a similar scenario, this could equally be a problem for you: ChooseAuthScheme() tests several different authentication methods, NEGOTIATE, NTLM, PASSPORT, DIGEST and finally BASIC, in this sequence, and will stubbornly select the first one that's supported on both client and server, discarding all other options.
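One possible workaround, purely an assumption on my part and not something the answer above verifies: since Basic credentials can be sent preemptively, you may be able to bypass the challenge-driven ChooseAuthScheme() selection altogether by setting the Authorization header yourself. The user/password pair is a placeholder:

#include <cpprest/http_client.h>
#include <vector>

// Hedged sketch: send "Authorization: Basic ..." up front so the server
// never issues the 401 challenge that triggers scheme selection.
web::http::http_request make_basic_request()
{
    web::http::http_request request(web::http::methods::GET);
    const std::string raw = "user:password";   // placeholder credentials
    auto encoded = utility::conversions::to_base64(
        std::vector<unsigned char>(raw.begin(), raw.end()));
    request.headers().add(U("Authorization"), U("Basic ") + encoded);
    return request;
}

Whether the TM1 server accepts preemptive Basic authentication is another assumption you would have to test.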
Casablanca (CpprestSDK) fully supports NTLM authentication. If the server rejects a request with status code 401/403 and a WWW-Authenticate header, the library will handle it internally using the most secure authentication method. In the case of NTLM you can either specify a login/password pair, or use automatic logon (Windows), based on the calling thread's current user token.
However, when I tried using the auto-logon feature, it unexpectedly failed on some workstations (case 1 in Don Pedro's answer).
The Windows version of Cpprest uses WinHTTP internally. When you try to authenticate automatically on the remote server, the automatic logon policy takes effect.
The automatic logon (auto-logon) policy determines when it is acceptable for WinHTTP to include the default credentials in a request. The default credentials are either the current thread token or the session token depending on whether WinHTTP is used in synchronous or asynchronous mode. The thread token is used in synchronous mode, and the session token is used in asynchronous mode. These default credentials are often the username and password used to log on to Microsoft Windows.
The default security level is WINHTTP_AUTOLOGON_SECURITY_LEVEL_MEDIUM, which allows auto-logon only for intranet servers. The rules governing intranet/internet server classification are defined in the Windows Internet Options dialog and are somewhat murky (at least in our case).
To ensure correct auto-logon I lowered the security level to WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW using the client config's native handle options:
#include <cpprest/http_client.h>
#include <Windows.h>
#include <winhttp.h>   // WinHttpSetOption, WINHTTP_OPTION_AUTOLOGON_POLICY

web::http::client::http_client_config make_config()
{
    web::http::client::http_client_config config;
    config.set_proxy(web::web_proxy::use_auto_discovery);

    // Explicit credentials if configured (m_wsUser / m_wsPass are members
    // of the surrounding class); otherwise WinHTTP falls back to the
    // calling thread's user token (auto-logon).
    if (!m_wsUser.empty()) {
        web::credentials cred(m_wsUser, m_wsPass);
        config.set_credentials(cred);
    }

    // Lower the auto-logon policy so default credentials may also be sent
    // to servers that are not classified as intranet.
    config.set_nativehandle_options([](web::http::client::native_handle handle) {
        DWORD dwOpt = WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW;
        WinHttpSetOption(handle, WINHTTP_OPTION_AUTOLOGON_POLICY,
                         &dwOpt, sizeof(dwOpt));
    });
    return config;
}
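The config can then be passed to the client constructor; the host name below is a placeholder:

web::http::client::http_client client(U("https://server.example/api/"), make_config());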
In my case this approach was acceptable, because the server and clients are always inside the organization's network boundary. Otherwise this solution is insecure and should not be used.

WinCE 6 smart device: Could not establish secure channel for SSL/TLS error

I have a web service which I need to access through HTTPS. We have a Workabout Pro 4 with Windows CE 6.0 running on it. While developing our app we had tested it through HTTP without any problem. When we went live and needed access to the HTTPS-based server, we received the error stated in the subject under a VS 2008 Smart Device project. On the device we receive a "could not display..." error. We have tried to import the standard certificate issued by GlobalSign. We still have no success accessing the web service. We can access the web service on a phone, tablet and PC, but not with the Pro 4 :). It would be kind if anyone could share his/her experience with HTTPS-based web service access or guide us to overcome our problem.
Secure connections are not fully implemented on CE; it has something to do with certificate management. Here is what I am considering for my project, and it gives a little more info on what the issue is: http://labs.rebex.net/HTTPS
Here are some quotes from the site in case it's down or something.
.NET Compact Framework does not support TLS 1.2, 1.1, SNI or SHA-2 based certificates.
.NET CF's HttpWebRequest is outdated. It does not support TLS 1.2 or 1.1, it doesn't support Server Name Indication (SNI), and it does not support SHA-2 in X509 certificates. It also suffers from several authentication-related bugs with no known workaround. This makes it unusable in a growing number of scenarios, and Microsoft will never fix this because it no longer cares about these legacy platforms.
Fortunately, it's now possible to work around these shortcomings using a beta version of the Rebex HTTPS library. It features an HttpWebRequest replacement object for .NET Compact Framework that plugs into the existing .NET CF WebRequest API and provides the features the default HTTP/HTTPS provider lacks. Most importantly, it adds support for TLS 1.2, TLS 1.1, SNI and SHA-2, it works even on old devices based on Windows CE 5.0, and it makes it simple to add TLS 1.2 support to existing SOAP web service clients.
We had a similar issue on CE 7.0.
HTTPS connections using SHA1 certificates would work; however, ones with SHA2 certificates would return the error
Could not establish trust relationship with remote server
If possible, try testing your code against a host that uses a SHA1 certificate to see if the issue might be related to missing SHA2 support in CE 6.0.
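If you want to verify which hash a host's certificate actually uses, one option is an OpenSSL-based check run from a desktop machine (a hedged sketch, not part of the original answer; the certificate file name is a placeholder):

#include <openssl/objects.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <cstdio>

// Prints e.g. "sha1WithRSAEncryption" or "sha256WithRSAEncryption"
// for a certificate exported from the host in PEM format.
int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = std::fopen(argv[1], "r");
    if (!f) return 1;
    X509 *cert = PEM_read_X509(f, nullptr, nullptr, nullptr);
    std::fclose(f);
    if (!cert) return 1;
    std::printf("Signature algorithm: %s\n",
                OBJ_nid2ln(X509_get_signature_nid(cert)));
    X509_free(cert);
    return 0;
}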
I should mention that we never formally approached Microsoft to get confirmation on whether SHA2 was supported or not in CE 6.0/7.0, it was just our conclusion after numerous tests that it wasn't.

MonoTouch support for accessing Mono.Security.Protocol.Ntlm.NtlmFlags

We use NTLM auth to access an ASP.NET web service from our MonoTouch app and everything works fine.
One of our customers uses the same app, and for them the NTLM auth fails from our app but works from the iPad's Safari browser.
Looking at the packet flow from the customer, the server does not return NTLMSSP_CHALLENGE when our app sends the NTLMSSP_NEGOTIATE message.
Looking at the differences between our app's NTLMSSP_NEGOTIATE message and the same message from the iPad's Safari:
Our MT app sets the NTLM flags to 0xb203, while Safari sets them to 0x88207.
The NegotiateNtlm2Key flag is set to 0 in our app and 1 in Safari.
Our app also sends the calling workstation domain and name fields, whereas Safari sends both as null.
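For reference, those two values can be decoded against the NTLMSSP negotiate flag constants from Microsoft's MS-NLMP specification; a small sketch (only the bits relevant here are listed):

#include <cstdint>
#include <cstdio>

// Subset of NTLMSSP negotiate flags (MS-NLMP).
constexpr uint32_t OEM_DOMAIN_SUPPLIED      = 0x00001000;
constexpr uint32_t OEM_WORKSTATION_SUPPLIED = 0x00002000;
constexpr uint32_t NEGOTIATE_NTLM2_KEY      = 0x00080000; // a.k.a. extended session security

int main()
{
    const uint32_t samples[] = { 0xb203, 0x88207 }; // MT app vs. Safari
    for (uint32_t flags : samples) {
        std::printf("0x%05x: ntlm2key=%d domain=%d workstation=%d\n",
                    flags,
                    (flags & NEGOTIATE_NTLM2_KEY) != 0,
                    (flags & OEM_DOMAIN_SUPPLIED) != 0,
                    (flags & OEM_WORKSTATION_SUPPLIED) != 0);
    }
    return 0;
}

The output makes the question's observations explicit: 0xb203 carries the domain/workstation bits but not NTLM2 session security, and 0x88207 is the other way around.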
The client's server is Windows Server 2003 and they also use Kerberos as their main authentication scheme and fall back on NTLM.
Would setting the NegotiateNtlm2Key flag in Mono.Security.Protocol.Ntlm.NtlmFlags help?
NTLMv2 Session and NTLMv2 Authentication have now been implemented in Mono (mono/master commit 45745e5).
See this article for a description of the different NTLM versions.
By default, Mono now uses NTLMv2 Session Authentication whenever the server supports it and falls back to LM & NTLM otherwise.
The default behavior can be configured by using the new Mono.Security.Protocol.Ntlm.Type3Message.DefaultAuthLevel property in Mono.Security.dll (see Type3Message.cs and NtlmAuthLevel.cs in mcs/class/Mono.Security/Mono.Security.Protocol.Ntlm).
This is similar to the Lan Manager Authentication Level in Windows.
Update 01/26/13
There has been an issue with Windows Server 2008 R2 not accepting the domain name that it sent back in the Type 2 Message's Target Name (or Domain Name from the Target Info block).
Therefore, we are now using the domain name from the NetworkCredential to allow the user to specify the desired domain. This is also the domain name that's initially being sent to the server in the Type 1 Message.
Simply setting flags? Maybe, but IMHO that's quite unlikely.
That code base was written in 2003 (and updated in 2004) and I'm pretty sure that I (as the author of the low-level code) did not have access to a Windows 2003 server or a Kerberos-enabled domain at that time.
The amount of required change, for a fallback, might not be too large (but I would not bet $5 on that ;-) if you already have the environment to test it. I'm 100% positive that the Mono project would be happy to receive patches to enable this. You can also file a bug report (priority: enhancement) to ask for this feature at http://bugzilla.xamarin.com
An alternative is to use the iOS API, which I assume Safari is using, to communicate with the ASP.NET web service and deserialize the data yourself. Hard to say which option is more complex.

Building a secure web service without buying (and renewing) a certificate

The goal: a web service, secure, that will be called by exactly two clients, both outside the local network. The most obvious way to secure a web service is via https, obtaining a certificate from some CA. The problem is that this is a silly waste of money. The whole point of a CA is that it is a publicly trusted authority, so I don't have to verify my identity to every single person who wants to use my web page, the CA is doing that for them. However, when I'm dealing with a very small number of known clients, rather than the wide open public, I don't need anyone to vouch for me. We can do verification through our own channels.
Is there any way to accomplish this? Ideally, I'd be able to operate https with a certificate recognized by those calling my service, and if nobody else recognizes the certificate as valid, I don't care. I don't want them calling this service anyway. This should be a fairly common need in B2B data transfers (fixed-endpoint communications, rather than services intended for public consumption), and it is easy to do if you're transferring actual files (PGP-style encryption lets you simply verify and import one another's keys directly). But it isn't clear to me that this is possible with web sessions. It sure should be, if it is not. I have found some documentation of self-signed certificates, but they all seem to be intended for development purposes only, or internal use only, and expire quickly or require being on the same network.
Is there a good way to achieve this? Or am I going to have to encrypt the contents of the web service call instead? The latter is less desirable, because it would require the users of this service to add encryption code to their client applications (which assumes they are building these on a platform that can easily add support for common encryption routines, something that may or may not be true) rather than just relying on the standard HTTPS framework.
I'm working on the Windows (IIS/ASP.NET) platform, if that makes any difference.
Creating your own CA and generating self-signed certificates is the way to go. There is no reason why they must be for development only, or expire quickly. You will be in control of this.
When I implemented this in a Java environment, the most useful resource I found was on Baban's Weblog. You can probably find a resource more relevant to your IIS environment.
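As a hedged illustration of how little is involved (assuming OpenSSL 3.x; the common name and the ten-year lifetime are placeholders, and on Windows/IIS you would more likely use the platform's own certificate tooling), generating a long-lived self-signed certificate looks roughly like this:

#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <openssl/x509.h>
#include <cstdio>

int main()
{
    EVP_PKEY *pkey = EVP_RSA_gen(2048);          // fresh 2048-bit RSA key
    X509 *cert = X509_new();
    ASN1_INTEGER_set(X509_get_serialNumber(cert), 1);
    X509_gmtime_adj(X509_getm_notBefore(cert), 0);
    X509_gmtime_adj(X509_getm_notAfter(cert), 10L * 365 * 24 * 3600); // ~10 years
    X509_set_pubkey(cert, pkey);

    X509_NAME *name = X509_get_subject_name(cert);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char *)"b2b.example.com", -1, -1, 0);
    X509_set_issuer_name(cert, name);            // self-signed: issuer == subject
    X509_sign(cert, pkey, EVP_sha256());

    FILE *f = std::fopen("server.pem", "wb");
    PEM_write_X509(f, cert);                     // certificate
    PEM_write_PrivateKey(f, pkey, nullptr, nullptr, 0, nullptr, nullptr); // key, unencrypted (sketch only)
    std::fclose(f);

    X509_free(cert);
    EVP_PKEY_free(pkey);
    return 0;
}

Each client then imports this certificate (or your private CA's root) into its trust store, which is exactly the out-of-band verification the question describes.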
To offer a secure (encrypted) service you don't need a CA-issued certificate, only an HTTPS link. You are right that, in your case, a CA certificate does nothing for you. If your visitor insists on a certificate, then I second #sudocode's answer.
Our old authorization service used certificates, but in rebuilding it we got rid of the certificates and went to an Amazon EC2-style security for the services.

Why is RPC over HTTP a security problem?

I am currently reading on Web Services. There is a SOAP tutorial at http://www.w3schools.com/soap/soap_intro.asp . The following paragraph is from that page:
"Today's applications communicate using Remote Procedure Calls (RPC) between objects like DCOM and CORBA, but HTTP was not designed for this. RPC represents a compatibility and security problem; firewalls and proxy servers will normally block this kind of traffic."
I don't understand this. Can someone explain it to me, please? Especially, I want to know why RPC is a security problem (at least over HTTP). Knowing why exactly it is a compatibility problem would be nice, too.
The point they're making is that "traditional RPC" sometimes uses unusual low-level network protocols that often get blocked by corporate firewalls. Because SOAP uses HTTP, its traffic is "indistinguishable" from normal web page views, and so is not caught out by these firewalls.
Not too sure about the security point; I think they're probably implying that HTTP can easily be secured via HTTPS, whereas proprietary RPC protocols often aren't. Of course, this is protocol-dependent: not all RPC protocols will be insecure, and many of them can be tunnelled over HTTPS.
Regarding compatibility, the problem is that it's not obvious how to make something that uses DCOM talk to something that uses CORBA, for example. One of the aims of SOAP is to provide interoperability, so as to harmonize the way this sort of communication is implemented. (There may still be a few glitches regarding interoperability with SOAP, depending on the tools you use.)
Regarding security, for a long time policies have been built around using port numbers to distinguish applications: if you want to block a certain service (say NNTP), you block its port at the firewall level. This makes it easy to have coarse control over which applications may be used. What SOAP over HTTP does is push the problem to the layer above: you can no longer distinguish which application or service is used from the port number at the TCP level; instead, you would have to analyse the content of the HTTP message and the SOAP messages to authorize certain applications or services.
SOAP mostly uses HTTP POST to send its messages: that's using HTTP as a transport protocol, whereas HTTP is a transfer protocol, and is therefore not using HTTP in accordance with the web architecture (SOAP 1.2 may have attempted to improve the situation). Because almost everyone needs access to the web nowadays, it's almost guaranteed that the HTTP ports won't be blocked. That's effectively using a loophole, if no security layer is added on top of this.
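To make this concrete, here is a sketch (the endpoint, SOAPAction and body are invented for the example) showing that a SOAP call is just an HTTP POST on port 80, indistinguishable at the TCP level from a browser submitting a form:

#include <cpprest/http_client.h>

int main()
{
    // From a port-based firewall's point of view this is ordinary
    // web traffic on port 80.
    web::http::client::http_client client(U("http://example.com:80/service"));
    web::http::http_request request(web::http::methods::POST);
    request.headers().add(U("SOAPAction"), U("\"urn:example#GetQuote\""));
    request.set_body(
        U("<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">")
        U("<soap:Body><GetQuote/></soap:Body></soap:Envelope>"),
        U("text/xml"));
    client.request(request).wait();
    return 0;
}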
This being said, in terms of security there are advantages in using HTTP for SOAP communication, as there is more harmonization in terms of existing HTTP authentication systems, for example. What the SOAP/WS-* stack attempts to do is harmonize "RPC" communications independently of the platform. It's not a case of "SOAP is secure" vs. "DCOM/CORBA isn't": you still have to make use of its security components, e.g. WS-Security, and you may have been able to achieve a reasonable level of security with other systems too.