I had a problem where Fiddler wasn't showing the web service calls made from my application (running locally). I found and solved my problem.
So my question is not how, but why does Fiddler not show web service traffic? I have a very limited understanding of how network traffic works, so this might be quite simple/obvious. All I'm able to decipher is:
I don't think it has anything to do with HTTPS, as I can see HTTPS requests in Fiddler (decoded if I want through Fiddler's settings).
I copied a piece of code (new WebProxy("127.0.0.1", 8888);) in order to get it to work, so it must have something to do with proxies?
This is an ASP.NET application in case that makes a difference.
Really old question but:
While the answer and comments hint towards the right solution, they are far from answering the question.
Fiddler sees traffic made by your user account. Since web services run under the application pool identity, Fiddler cannot see their traffic.
The easiest solution (and the only one that worked for me) is to change the website's application pool to run under your account (a scripted alternative is sketched after the steps below).
Simply:
Open IIS
Find your website's application pool name (right-click the website -> Manage Website -> Advanced Settings -> listed under Application Pool)
Go to the application pool's advanced settings (Application Pools -> right-click your desired application pool -> Advanced Settings)
Change User Account to your account (Identity -> ... -> Custom Account -> Set)
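If you would rather script this than click through IIS Manager, here is a minimal sketch using the Microsoft.Web.Administration API; the pool name and account are placeholders, and it assumes the code runs elevated on the IIS machine.

using System;
using Microsoft.Web.Administration; // requires a reference to Microsoft.Web.Administration.dll

class SetAppPoolIdentity
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "MyAppPool" is a placeholder - use the pool name found in step 2 above.
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
            pool.ProcessModel.IdentityType = ProcessModelIdentityType.SpecificUser;
            pool.ProcessModel.UserName = @"MYDOMAIN\myuser";   // your account
            pool.ProcessModel.Password = "your-password";
            serverManager.CommitChanges();
        }
    }
}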
As noted above:
That first paragraph was just the explanation I needed: "When Fiddler launches and attaches, it adjusts the current user's proxy settings to point at Fiddler, running on 127.0.0.1:8888 by default. That means that traffic from most applications automatically flows through Fiddler without any additional configuration steps." Although I guess I should also thank Eric as he appears to be the one who wrote it!
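As a quick illustration of that explanation (a hedged sketch; the URL is just an example): a process running under your account while Fiddler is capturing will typically report Fiddler's listener as its default proxy, whereas a process running under another identity, such as an IIS application pool, will not.

using System;
using System.Net;

class ShowDefaultProxy
{
    static void Main()
    {
        // With Fiddler attached, this usually prints http://127.0.0.1:8888/;
        // under a different user (e.g. an app pool identity) it won't.
        IWebProxy proxy = WebRequest.DefaultWebProxy;
        Console.WriteLine(proxy.GetProxy(new Uri("http://example.com/")));
    }
}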
References
Capturing Traffic for .Net Services with Fiddler
Adding the following content to the application's configuration file is also a solution.
<system.net>
  <defaultProxy enabled="true">
    <proxy bypassonlocal="false" proxyaddress="http://127.0.0.1:8888" />
  </defaultProxy>
</system.net>
Also, if the web service traffic is going to another application on the same machine, try using the machine name instead of localhost in the request URL; requests addressed to localhost can bypass the proxy, so Fiddler never sees them.
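For completeness, here is a rough sketch of the programmatic equivalent of that config entry (assuming Fiddler is listening on its default port 8888); you can set the process-wide default proxy in code, or set it per request as in the snippet from the question.

using System.Net;

// Process-wide: every WebRequest made afterwards goes through Fiddler.
WebRequest.DefaultWebProxy = new WebProxy("127.0.0.1", 8888);

// Or per request, matching the snippet from the question:
var request = (HttpWebRequest)WebRequest.Create("http://example.com/MyService.asmx");
request.Proxy = new WebProxy("127.0.0.1", 8888);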
Related
I have Shibboleth configured on an IIS server and am using it protect a .NET application.
I need authenticated access for users accessing the application over the web, and for that Shibboleth is working fine.
The application also hosts web services which need to be accessed by other applications on the same server, and for that, working with Shibboleth is a challenge since web service clients cannot deal with the login page.
Is it possible to configure Shibboleth to ignore requests coming from the same server, for example by checking the IP address?
This won't directly answer your question, but I can share a workaround I found, and I hope it can help with your problem too.
Define another website in IIS pointing to the same folder as the initial one, and make it only respond to a different domain (like something.local). Then in IP Address and Domain Restrictions, make sure only 127.0.0.1 is allowed to access it.
In C:\Windows\System32\drivers\etc, open the file "hosts" in Notepad (running with Administrator privileges). Add the line "127.0.0.1 something.local" (no quotes; make sure the domain is the same one you defined before).
Now, have the web service clients call the application using the new domain, as sketched below.
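To make that last step concrete, a hedged sketch of what the server-local call might look like (the path is illustrative only); because something.local resolves to 127.0.0.1 and the second site only accepts loopback traffic, the request never goes through the Shibboleth-protected site.

using System.Net;

// something.local maps to 127.0.0.1 via the hosts file entry above;
// the second IIS site answers it and never shows the Shibboleth login page.
var request = (HttpWebRequest)WebRequest.Create("http://something.local/MyService.asmx");
using (var response = (HttpWebResponse)request.GetResponse())
{
    // consume the service response here
}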
Say we have hosted a few web services at https://mycompany.com/Service
e.g.
https://mycompany.com/Service/Service1
https://mycompany.com/Service/Service2
https://mycompany.com/Service/Service3
As you can see, on mycompany.com we have hosted 3 web services, each with its own distinct URL.
What we have is a JBoss instance with 3 different WARs deployed in it. When someone hits a service, the request gets past our firewall, and then the load balancer redirects it to JBoss on port 8080 at the required path, where it gets serviced.
The 3 services are consumed by 3 different clients. My question: if, say, Client1 using Service1 is only given the URL corresponding to it, can they use some kind of scanner that will also inform them that Service2 and Service3 are also available on mycompany.com/Service?
Irrespective of the clients: can anyone simply use some scanner tool to identify which service endpoints are exposed on the host?
Kindly note they are a mix of SOAP (WSDL) and REST-based services deployed on the same JBoss instance.
Yes, someone can scan for those endpoints. Their scanner would generate a bunch of 404s in your logs, because it would have to guess the other URLs. If you have some kind of rate limiting firewall, it might take them quite a long time. You should be checking the logs regularly anyway.
If you expose your URL to the public internet, relying on people not finding it is just security via obscurity. You should secure each URL using application-level security, and assume that the bad guys already have the URL.
You may want to consider adding subdomains for the separate applications (e.g. service1.mycompany.com, service2.mycompany.com) - this will make firewalling easier.
Well, I created the reference and tested it on my local machine; all was well. I deployed the solution to the production server, and here we go:
An error occurred while trying to make a request to URI
This could be due to attempting to access a service in a cross-domain
way without a proper cross-domain policy in place, or a po
From what I gathered, it's a security measure to prevent something (not sure what). Well, I can't make the provider put clientaccesspolicy.xml and crossdomain.xml in place.
What are my options? It looks like either running the Silverlight app in elevated mode or... ?
I don't want to require elevated trust.
The only way I know is to call my server and make the call to the web service from my server, returning the data back to the client. Seems like too much overhead. Is there a better way? Really frustrating.
I'm afraid your options are limited here. In compliance with the same-origin policy, the cross-domain policy file is a must. Here you can find an example of why. Personally, I would go down the route of proxying the remote web service via your hosting server if you can't influence the provider.
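To illustrate the proxying approach, here is a minimal sketch assuming a generic ASP.NET handler hosted on the same origin as the Silverlight app (ServiceProxy.ashx and the provider URL are made-up names); the Silverlight client calls the handler, and the handler forwards the request server-side, where the cross-domain policy no longer applies.

using System.IO;
using System.Net;
using System.Web;

// Hypothetical ServiceProxy.ashx: a same-origin endpoint the Silverlight client can call.
public class ServiceProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Forward the call to the third-party service from the server side.
        var request = (HttpWebRequest)WebRequest.Create("https://provider.example.com/Service.svc/data");
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            context.Response.ContentType = response.ContentType;
            context.Response.Write(reader.ReadToEnd());
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}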
I apologize in advance if the question is ridiculous.
I have an asmx service running in Azure (HTTP - no SSL).
I have a WPF app that loads an X509Certificate2 and adds it to the request by doing the following:
X509Certificate2 cert = new X509Certificate2("...");
webRequest.ClientCertificates.Add(cert);
In the web service I get the certificate by
new X509Certificate2(this.Context.Request.ClientCertificate.Certificate)
And then I load a cert (that I have both uploaded to the Azure control panel and added to my service definition file) by using the following sample:
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindBySubjectName, certName, true);
And then I validate by doing the following:
clientCert.Thumbprint == certs[0].Thumbprint
Now unfortunately I get an exception (System.Security.Cryptography.CryptographicException: m_safeCertContext is an invalid handle) as soon as I do
Request.ClientCertificate.Certificate
So I have a few questions. How do I avoid the exception? This answer states I need to modify an IIS setting, but how can I do that in Azure?
In any case, is this even the proper way to do certificate authentication?
Thanks!
You can use command scripts to modify IIS, in combination with appcmd.exe.
For a quick example (disabling timeout in an application pool), take a look at this sample by Steve Marx.
In this example, you'd call DisableTimeout.cmd as a startup task. For more info on creating startup tasks, you can watch this episode of Cloud Cover Show. There should be a lab on startup tasks in the Platform Training Kit as well.
Just remember that any type of IIS configuration change should be made via an automated task at startup. If you manually change IIS via RDP, those changes won't propagate to all of your instances, and won't remain persistent in the event of hardware failure or OS update.
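If you'd rather stay in code than write a .cmd startup task, a rough sketch of the same idea from the role entry point is below; this is not the linked sample's method, and it assumes a web role with <Runtime executionContext="elevated" /> in ServiceDefinition.csdef. It applies the same idle-timeout change as the linked sample to every application pool on the instance.

using System;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Runs on every instance at startup, so the change survives reimaging and scale-out,
        // unlike a manual edit made over RDP.
        using (var serverManager = new ServerManager())
        {
            foreach (ApplicationPool pool in serverManager.ApplicationPools)
            {
                pool.ProcessModel.IdleTimeout = TimeSpan.Zero; // 0 = never time out
            }
            serverManager.CommitChanges();
        }

        return base.OnStart();
    }
}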
You can remote into your Azure instances to manage IIS. As for a way to do it globally for all instances at once, I'm not sure. That would be an interesting side project, though.
http://learn.iis.net/page.aspx/979/managing-iis-on-windows-azure-via-remote-desktop/
I have a web service running under IIS7 on a server with a host header set so that it receives requests made to http://myserver1.mydomain.com.
I've set Windows Integrated Authentication to Enabled and everything else (Basic, Anonymous, etc.) to Disabled.
I'm testing the web service using a PowerShell script, and it works fine when I run it from my workstation against http://myserver1.mydomain.com
However, when I run the same exact script on the IIS server itself, I get a 401-Unauthorized message.
In addition, I've tried installing the web service on a second server, myserver2.mydomain.com. Again, I can run my test script fine from BOTH my workstation and from myserver1.
So it seems the only issue is when the client is on the same box as the web server itself - somehow the windows credentials are not being passed or recognized.
I tried playing with IE settings on myserver1 (checked and unchecked 'Enable Windows Integrated Authentication', and added the URL to Local Sites). That did not seem to have an effect.
When I look at the IIS logs, I see the 401 unauthorized line but very little other information.
I see basically the same behavior when testing with IE (v9) - works from my workstation but not when IE is running on the IIS server.
I found the answer after several hours:
By default, there is something called a loopback check, which will reject Windows authentication if the host header used for the site does not match the local host's name. This behavior is only seen when the client is on the local host. The check is there to defeat possible reflection attacks.
More details here:
http://support.microsoft.com/kb/896861
The KB article discusses ways to disable the loopback check, but I ended up just switching from using host headers to using ports to distinguish the different sites on the IIS server.
Thanks to those who gave assistance.
Try checking the actual credential that is being passed when you are running on the server itself. Often you will be running under some system account that doesn't have access to the resource in question.
For example, on your box your credentials are running as...
MYDOMAIN\MYNAME
and the server will be something like...
SYSTEM\SYSTEM_ACCOUNT
and so this will fail because 'SYSTEM\SYSTEM_ACCOUNT' doesn't have credentials.
If this is the case, you can fix the problem in one of two ways.
Give 'SYSTEM\SYSTEM_ACCOUNT' access to the resource in question. Most people would avoid this strategy due to security concerns (which is why the account has no access in the first place).
Impersonate, or change the credentials of the client manually to something that does have access to the resource, 'MYDOMAIN\MYNAME' for example. This is what most people would probably go with, including myself; a rough sketch follows below.
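A rough sketch of option 2 on the calling side (the URL and account are placeholders): when the client runs under an account that has access you can simply flow its Windows identity, otherwise supply explicit credentials.

using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://myserver1.mydomain.com/MyService.asmx");

// Flow the interactive user's Windows identity (works when MYDOMAIN\MYNAME runs the client)...
request.UseDefaultCredentials = true;

// ...or, when the process runs as a system account with no access, pass explicit credentials:
// request.Credentials = new NetworkCredential("MYNAME", "password", "MYDOMAIN");

using (var response = (HttpWebResponse)request.GetResponse())
{
    // a 200 response here means Windows authentication succeeded
}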