I am developing an application using Kerberos authentication in a double-hop scenario: the client connects to a server which needs to use the client's credentials to connect to a SQL server.
I already did this using gSOAP and the GSS-API from the MIT Kerberos release, but I would like to use WinHTTP to handle the authentication instead.
Yet, when I try to use WinHTTP with the gSOAP WinHTTP plugin (gsoapwinhttp on Google Code), the delegation is blocked by the Domain Controller. I want to keep this Active Directory configuration:
When I look at the GSS-API Kerberos ticket, I see several flags that allow delegation, such as forwardable or deleg_req_flag:
So my question is: can I modify the WinHTTP flags to allow delegation without changing the Domain Controller's configuration?
Edit:
I'm using the WINHTTP_AUTH_SCHEME_NEGOTIATE option in setCredentials and WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW in setOption to make sure Kerberos or NTLM is used, as described on Microsoft's WinHttpSetCredentials documentation page.
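For reference, outside the gsoapwinhttp plugin those two settings map onto plain WinHTTP calls roughly like the sketch below. hRequest is assumed to come from WinHttpOpenRequest, error handling is omitted, and passing NULL credentials to request the logged-on user's defaults is an assumption, not something the plugin necessarily does:

#include <windows.h>
#include <winhttp.h>

// Rough sketch only: hRequest is a request handle from WinHttpOpenRequest.
void use_negotiate_with_default_credentials(HINTERNET hRequest)
{
    // Answer the server's 401 with the Negotiate package (Kerberos if a ticket
    // can be obtained, otherwise NTLM). NULL user/password is assumed here to
    // mean "use the logged-on user's default credentials"; pass explicit
    // strings if your WinHTTP version rejects NULL.
    WinHttpSetCredentials(hRequest, WINHTTP_AUTH_TARGET_SERVER,
                          WINHTTP_AUTH_SCHEME_NEGOTIATE, NULL, NULL, NULL);

    // Allow WinHTTP to send those default credentials to any server,
    // not only hosts classified as intranet.
    DWORD policy = WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW;
    WinHttpSetOption(hRequest, WINHTTP_OPTION_AUTOLOGON_POLICY,
                     &policy, sizeof(policy));
}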
Using Fiddler I checked the HTTP traffic and confirmed that Kerberos is being used, but I still can't delegate to the next server.
I tried almost every possible option of setOption, such as WINHTTP_ENABLE_SSL_REVERT_IMPERSONATION and anything else that looked related to delegation, but I get a strange error when using that option:
End of file or no input: message transfer interrupted or timed out (629s recv send delay)
I tried setting a different recv_timeout, but I still get the same error.
I've studied this type of Kerberos delegation problem a lot. You are experiencing the Kerberos double-hop problem. In the Active Directory configuration screenshot you provided, delegation is not configured at all, and you must configure it. The first thing to try is open delegation: select the radio button "Trust this computer for delegation to any service (Kerberos only)". You set this on the computer account in AD that needs to use the client's credentials to connect to the SQL server, not on the domain controller account. If your application is actually running on a domain controller, that is a known issue and an unsupported configuration that won't work; please move the application to a member server of the domain.
Regarding the flags allowing delegation, such as forwardable or deleg_req_flag, shown as set in the Fiddler trace: I'm not sure why they appear set, but they might have come from a different account. From the account in the screenshot you posted, Kerberos delegation is not configured at all.
In your scenario, you must set Kerberos delegation on the computer account that is running the WinHTTP process; in the example shown below, that would be "Server1".
In the Kerberos delegation properties of that account, you can specify either open delegation (the top radio button, as stated above) or constrained delegation to the service on Server2 to which Server1 may forward the user's credentials (the Kerberos service tickets).
Related
I'm using WinHTTP right now and looking to switch over to cpprestsdk. I'm looking through the documentation and I don't see anything about NTLM/Negotiate/Kerberos support. Am I missing something? I find it hard to believe that MS wouldn't have supported it, but I don't see any sample code showing how you would use it.
The reason we need NTLM/Negotiate/Kerberos support is that we run our client via RemoteApp and want our users to log in only once with their domain credentials when starting the app, rather than being prompted for a password a second time.
It seems that Windows authentication is readily built into Casablanca (when used on a Windows machine). Take a look at src/http/client/http_client_winhttp.cpp; there you'll find a function ChooseAuthScheme. To my understanding, it selects the "most secure" authentication scheme that the server offers. If the server claims to support both BASIC and NEGOTIATE, for example, it will prefer and select the latter as the more secure scheme. Using Windows authentication should therefore be very easy: just don't set any credentials (username/password) and connect to a server that supports Windows authentication and announces it in the HTTP WWW-Authenticate header (otherwise Casablanca will of course not attempt Windows authentication).
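A minimal sketch of that usage, assuming a placeholder server URL that advertises Negotiate; no credentials are set, so the library falls back to the logged-on Windows user:

#include <cpprest/http_client.h>
#include <iostream>

// Minimal sketch: no credentials are configured, so on Windows Casablanca lets
// WinHTTP answer the server's Negotiate challenge with the current user's token.
// The URL below is a placeholder.
void call_with_windows_auth()
{
    web::http::client::http_client client(U("http://myserver.mydomain.com/service"));
    client.request(web::http::methods::GET, U("/resource"))
        .then([](web::http::http_response response)
        {
            // 200 means the Negotiate handshake succeeded; 401 means it did not.
            ucout << response.status_code() << std::endl;
        })
        .wait();
}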
BUT
I'm also trying to use Windows authentication in Casablanca and I'm currently facing two problems:
Windows authentication sort of works as explained above in my scenario. Bewilderingly, however, it sometimes does not. It goes as far as this: I can connect to a server from machine A with user Foo logged in, yet cannot connect from machine B with user Bar logged in at the very same time, with all machines in the very same network segment and no router or proxy in between. From a Fiddler log I can see that, in the failing case, Casablanca attempts a connection without any authentication and receives an HTTP 403 from the server (which is to be expected and perfectly fine), but afterwards fails to resend the request with NEGOTIATE in the headers; it simply aborts. In the successful case, by contrast, the request is resent with the credentials as a base64-encoded blob (which in fact has binary content, and presumably only MS knows what it means). I'm currently unclear about what triggers this, why it happens and how to resolve it. It could well be a bug in Casablanca.
I have a special scenario with a server that has a mixed operation mode where users should be able to connect via either BASIC or Windows authentication. Let's not reason about the rationale or best practices of this scenario; I'm dealing with a TM1 database by IBM and it's just what they implemented. The users admissible for basic and Windows authentication don't necessarily overlap: one group must authenticate via Windows integrated authentication (say, domain users) and the other must use basic authentication (say, external users). So far I have found no way (without patching Casablanca) to clamp the SDK to a particular mode. If the server announces BASIC and NEGOTIATE, it will always switch to NEGOTIATE as the more secure mode, making basic authentication inaccessible and effectively locking out the BASIC group. So if you have a similar scenario, this could equally be a problem for you: ChooseAuthScheme() tests several authentication methods, NEGOTIATE, NTLM, PASSPORT, DIGEST and finally BASIC, in this sequence, and will stubbornly select the first one that's supported on both client and server, discarding all other options. A possible workaround is sketched below.
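One possible workaround, without patching Casablanca, might be to pre-empt the scheme selection for the BASIC group by adding the Authorization header yourself before the server ever sends its challenge. This is an untested sketch; the user, password and path are placeholders:

#include <cpprest/http_client.h>
#include <string>
#include <vector>

// Untested sketch: send "Authorization: Basic ..." up front so ChooseAuthScheme()
// never gets to pick NEGOTIATE for the externally authenticated users.
// User, password and path are placeholders.
pplx::task<web::http::http_response> basic_auth_get(
    web::http::client::http_client& client,
    const utility::string_t& path,
    const std::string& user,
    const std::string& password)
{
    const std::string pair = user + ":" + password;
    const std::vector<unsigned char> raw(pair.begin(), pair.end());

    web::http::http_request req(web::http::methods::GET);
    req.set_request_uri(path);
    req.headers().add(web::http::header_names::authorization,
                      U("Basic ") + utility::conversions::to_base64(raw));
    return client.request(req);
}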
Casablanca (cpprestsdk) fully supports NTLM authentication. If the server rejects a request with status code 401/403 and a WWW-Authenticate header, the library handles it internally using the most secure authentication method available. In the case of NTLM you can either specify a login/password pair or use automatic logon (Windows), based on the calling thread's current user token.
However, when I tried using the auto-logon feature, it unexpectedly failed on some workstations (case 1 in Don Pedro's answer).
The Windows version of cpprest uses WinHTTP internally. When you try to authenticate automatically against the remote server, the automatic logon policy takes effect:
The automatic logon (auto-logon) policy determines when it is acceptable for WinHTTP to include the default credentials in a request. The default credentials are either the current thread token or the session token, depending on whether WinHTTP is used in synchronous or asynchronous mode. The thread token is used in synchronous mode, and the session token is used in asynchronous mode. These default credentials are often the username and password used to log on to Microsoft Windows.
The default security level is WINHTTP_AUTOLOGON_SECURITY_LEVEL_MEDIUM, which allows auto-logon only for intranet servers. The rules governing intranet/internet server classification are defined in the Windows Internet Options dialog and are somewhat murky (at least in our case).
To ensure correct auto-logon, I lowered the security level to WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW using the request's native handle options in the client config:
#include <cpprest/http_client.h>
#include <windows.h>
#include <winhttp.h>

// m_wsUser / m_wsPass are members of the surrounding class and hold optional
// explicit credentials; when they are empty, WinHTTP's auto-logon is used instead.
web::http::client::http_client_config make_config()
{
    web::http::client::http_client_config config;
    config.set_proxy(web::web_proxy::use_auto_discovery);

    // Use explicit credentials if the caller supplied them.
    if (!m_wsUser.empty()) {
        web::credentials cred(m_wsUser, m_wsPass);
        config.set_credentials(cred);
    }

    // Lower the auto-logon policy on the underlying WinHTTP request handle so that
    // default credentials may also be sent to servers outside the intranet zone.
    config.set_nativehandle_options([](web::http::client::native_handle handle) {
        DWORD dwOpt = WINHTTP_AUTOLOGON_SECURITY_LEVEL_LOW;
        WinHttpSetOption(handle, WINHTTP_OPTION_AUTOLOGON_POLICY, &dwOpt, sizeof(dwOpt));
    });

    return config;
}
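For context, the resulting config is then handed to the client; a hypothetical usage (the URL is a placeholder, and make_config() is assumed to be called from the class that owns m_wsUser/m_wsPass):

// Hypothetical usage; the URL is a placeholder.
web::http::client::http_client client(U("http://intranet-server/api"), make_config());
auto response = client.request(web::http::methods::GET).get();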
In my case this approach was acceptable because the server and clients are always inside the organization's network boundary. Otherwise this solution is insecure and should not be used.
I am looking at two WSO2 client samples that call the user management web service. The first is a simple client, the second is a web app.
The first client sets the system SSL properties and then instantiates a WSUserStoreManager object.
The second one, the web app, does not set SSL properties at all, and instead instantiates a RemoteUserStoreManagerServiceStub.
Could someone please explain these differences? Which service should be called when two similar ones are available (a regular one and a 'remote' one)? Isn't it always necessary to set up the SSL properties when calling an HTTPS endpoint? Thanks.
If you are calling an HTTPS endpoint, you need to set the SSL trust store properties so the client trusts the server. But this is under the client's control: if the client wants to, it can trust the server; if not, it can ignore the check. If you want to ignore it, you have to override Java's default TrustManager.
Normally, Java has a trust store file called "cacerts" that contains all trusted CA certificates. But the WSO2 IS server's certificate is self-signed, so Java cannot trust it. Therefore, if you want, you can import the certificate into the "cacerts" file. I am not sure why the client and the web app differ. However, if you are calling over HTTPS, trust must be established. Please check the web app source more closely: it may have been set to ignore trust, or, since the web app runs in an app server, the Java SSL trust properties may already have been set to the correct file.
We use NTLM auth to access an ASP.NET web service from our MonoTouch app and everything works fine.
One of our customers uses the same app, and for them the NTLM auth fails from our app but works from the iPad's Safari browser.
Looking at the packet flow from the customer, the server does not return NTLMSSP_CHALLENGE when our app sends the NTLMSSP_NEGOTIATE message.
Looking at the differences between our app's NTLMSSP_NEGOTIATE message and the same message from the iPad's Safari:
Our MonoTouch app sets the NTLM flags to 0xb203, while Safari sets them to 0x88207.
The NegotiateNtlm2Key flag is set to 0 in our app and 1 in Safari.
Our app also sends the calling workstation domain and workstation name fields, whereas Safari sends both as null.
The customer's server is Windows Server 2003; they use Kerberos as their main authentication scheme and fall back to NTLM.
Would setting the NegotiateNtlm2Key flag in Mono.Security.Protocol.Ntlm.NtlmFlags help?
NTLMv2 Session and NTLMv2 authentication have now been implemented in Mono (mono/master commit 45745e5).
See this article for a description of the different NTLM versions.
By default, Mono now uses NTLMv2 Session Authentication whenever the server supports it and falls back to LM & NTLM otherwise.
The default behavior can be configured by using the new Mono.Security.Protocol.Ntlm.Type3Message.DefaultAuthLevel property in Mono.Security.dll (see Type3Message.cs and NtlmAuthLevel.cs in mcs/class/Mono.Security/Mono.Security.Protocol.Ntlm).
This is similar to the Lan Manager Authentication Level in Windows.
Update 01/26/13
There has been an issue with Microsoft Server 2008 RC2 not accepting the domain name that it sent back in the Type 2 Message's Target Name (or Domain Name from the Target Info block).
Therefore, we are now using the domain name from the NetworkCredential to allow the user to specify the desired domain. This is also the domain name that's initially being sent to the server in the Type 1 Message.
Simply setting flags? Maybe, but IMHO that's quite unlikely.
That code base was written in 2003 (and updated in 2004) and I'm pretty sure that I (as the author of the low-level code) did not have access to a Windows 2003 server or a Kerberos-enabled domain at that time.
The amount of change required for a fallback might not be too large (but I would not bet $5 on that ;-) if you already have the environment to test it. I'm 100% positive that the Mono project would be happy to receive patches to enable this. You can also file a bug report (priority: enhancement) to ask for this feature at http://bugzilla.xamarin.com
An alternative is to use the iOS API (which I assume Safari uses) to communicate with the ASP.NET web service and deserialize the data yourself. It's hard to say which option is more complex.
I have a web service running under IIS7 on a server with a host header set so that it receives requests made to http://myserver1.mydomain.com.
I've set Windows Integrated Authentication to Enabled and everything else (Basic, Anonymous, etc.) to Disabled.
I'm testing the web service with a PowerShell script, and it works fine when I run it from my workstation against http://myserver1.mydomain.com
However, when I run the same exact script on the IIS server itself, I get a 401-Unauthorized message.
In addition, I've tried installing the web service on a second server, myserver2.mydomain.com. Again I can call my test script fine from BOTH my workstation and from myserver1.
So it seems the only issue is when the client is on the same box as the web server itself; somehow the Windows credentials are not being passed or recognized.
I tried playing with IE settings on myserver1 (checked and unchecked 'Enable Windows Integrated Authentication', and added the URL to Local Sites). That did not seem to have an effect.
When I look at the IIS logs, I see the 401 unauthorized line but very little other information.
I see basically the same behavior when testing with IE (v9) - works from my workstation but not when IE is running on the IIS server.
I found the answer after several hours:
By default, Windows performs something called a loopback check, which rejects Windows authentication if the host header used for the site does not match the local host's name. This behavior only appears when the client is on the local host. The check exists to defeat possible reflection attacks.
More details here:
http://support.microsoft.com/kb/896861
The KB article discusses ways to disable the loopback check, but I ended up just switching from host headers to ports to distinguish the different sites on the IIS server.
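For anyone who prefers the registry route from that KB instead, setting the DisableLoopbackCheck value it describes could be scripted roughly like this (assumes administrative rights; a reboot may be required for the change to take effect):

#include <windows.h>

// Sketch of KB896861's DisableLoopbackCheck workaround, applied programmatically.
// Run with administrative rights; link against advapi32.
bool disable_loopback_check()
{
    HKEY hKey = NULL;
    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                        L"SYSTEM\\CurrentControlSet\\Control\\Lsa",
                        0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL) != ERROR_SUCCESS)
        return false;

    DWORD value = 1;  // 1 = disable the loopback check
    const LONG rc = RegSetValueExW(hKey, L"DisableLoopbackCheck", 0, REG_DWORD,
                                   reinterpret_cast<const BYTE*>(&value), sizeof(value));
    RegCloseKey(hKey);
    return rc == ERROR_SUCCESS;
}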
Thanks to those who gave assistance.
Try checking the actual credential that is being passed when you are running on the server itself. Often you will be running under some system account that doesn't have access to the resource in question.
For example, on your box your credentials are running as...
MYDOMAIN\MYNAME
...and on the server they will be something like...
SYSTEM\SYSTEM_ACCOUNT
...and so this will fail, because 'SYSTEM\SYSTEM_ACCOUNT' doesn't have valid credentials for the resource.
If this is the case, you can fix the problem in one of two ways.
Give 'SYSTEM\SYSTEM_ACCOUNT' access to the resource in question. Most people would avoid this strategy due to security concerns (which is why the account has no access in the first place).
Impersonate, or change the credentials of the client manually to something that does have access to the resource, 'MYDOMAIN\MYNAME' for example. This is what most people would probably go with, including myself.
I have a FinalBuilder project where I deploy an ASP.NET website to a remote folder, configured as a website in IIS.
As part of my build script, I want to use the FinalBuilder "HTTP Get File" action to help determine whether my deployment was successful.
I'm having difficulty, because the website is configured (under IIS 6) to use Integrated Windows Authentication, and anonymous access is not enabled.
Now, the HTTP Get File action has only a handful of properties, one of which is a Security section containing a UserName and Password. Great, I thought! I can just put some valid credentials in there, which FinalBuilder will impersonate while retrieving my file.
It turns out I was mistaken. I receive the following error:
Error retrieving url : Socket Error # 10061
Connection refused.
If I run the action without setting the Security Username and Password, I get the following error:
Error retrieving url : HTTP/1.1 401 Unauthorized Response Code : 401
Here are some facts to help with the context of my problem.
I'm running FinalBuilder 6 Professional on a Windows Server 2003 installation, and deploying my ASP.NET website to a remote IIS 6 server within our corporate LAN.
If I configure IIS on the remote server to allow anonymous access, I can run the HTTP Get File action without error. However, running this particular site with anonymous access is not acceptable in our situation.
Can anyone help suggest a workaround?
For a definitive answer, I think the FinalBuilder forum is probably your best bet.
My guess, though, is that the HTTP library used by FB doesn't support Windows authentication and is failing because no common authentication method can be negotiated. Since HTTPS isn't supported by the HTTP Get File action either, the possible workaround of allowing Basic authentication on your site isn't a good idea, as you would be passing credentials over the network in plain text.
The only remaining workaround I can think of (other than waiting for a future FB release) is creating your own FB action to retrieve the file. Using the .NET Framework's System.Net.WebClient, that should be trivial. Just start with a standalone EXE to make sure everything works, then refactor it into a 'real' action using FinalBuilder Action Studio (if that's even required: spawning an external EXE may work just fine in your case).