Web service fails with org.apache.axis2.AxisFault: The system cannot infer the transport information from the [my URL] URL

We have a strange situation. A web service (svc1) calls another web service (svc2) on a different box, both running in WebSphere. It works in every previous environment.
But recently they built out another staging environment as largely a clone of a working one. There, the call fails with this message every time svc1 attempts to call svc2.
Caused by: org.apache.axis2.AxisFault: The system cannot infer the transport information from the [svc2's URL] URL.
at org.apache.axis2.description.ClientUtils.inferOutTransport(ClientUtils.java:81)
at org.apache.axis2.client.OperationClient.prepareMessageContext(OperationClient.java:304)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:180)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:165)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:578)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.doInvoke(AxisInvocationController.java:127)
at org.apache.axis2.jaxws.core.controller.impl.InvocationControllerImpl.invoke(InvocationControllerImpl.java:93)
at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invokeSEIMethod(JAXWSProxyHandler.java:419)
... 45 more
The URL is correct. If we point the not-working apps at a database supporting a working set of apps, they work; but when we point the working apps from that other environment at the not-working environment's DB, they stop working.
This seems to indict the DB, yet the error has nothing to do with the DB. Svc1's only DB call happens well before the call to svc2 and works fine according to the logs. The logs also indicate svc2 never gets the request. So how can the database be the problem?
I know this isn't a lot to go on, but does anyone have suggestions on where to look to narrow this down? Based on the code and on when it fails, I can't believe the DB is the problem... yet the DB-swapping test seems to imply it is.

I might be pointing out the obvious, but the error message is telling you one of two things: either svc1 isn't configured correctly to invoke svc2, or svc2 isn't up and running.
For clarity, I'll label the working environment environmentA and the not-working environment environmentB.
Based on what you said, you were able to use environmentB.svc1 to invoke environmentA.svc2? If that is true, then environmentB.svc1 is configured correctly and working, which leaves svc2. You said svc2 doesn't receive the request, which means it's up at least. As you mentioned, this isn't much to go on, but I'd make sure svc2 is configured correctly, since it is up. Configuring a web service can be complicated, and one of the things that has to be configured is making the service available for client invocation. You might want to check out this link for information on how to do that:
https://www.ibm.com/support/knowledgecenter/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/twbs_publishwsdl.html.
There are a bunch of similar articles on configuring your web service that might help you find what's missing, so I would verify that the configuration for environmentB.svc2 matches environmentA.svc2, since you know that one is working.
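One extra thing that might be worth checking (just a sketch, not a guaranteed fix): the fault is raised client-side in ClientUtils.inferOutTransport, which Axis2 throws when it can't work out a transport (http/https) from the target URL, typically because the URL it ends up with is empty or malformed. Since swapping the DB changes the behaviour, it may be worth logging exactly which endpoint svc1's JAX-WS proxy resolves at runtime, and temporarily overriding it with a known-good value. The helper below and its parameters are hypothetical; adapt them to your generated client.

import java.util.Map;
import javax.xml.ws.BindingProvider;

public class EndpointCheck {

    // "port" is the generated JAX-WS proxy svc1 uses to call svc2; the parameter
    // names and the idea of passing in a known-good URL are illustrative only.
    public static void logAndOverrideEndpoint(Object port, String knownGoodUrl) {
        BindingProvider bp = (BindingProvider) port;
        Map<String, Object> requestContext = bp.getRequestContext();

        // Log the endpoint the proxy will actually use. An empty or malformed
        // value (e.g. missing the http/https scheme) is exactly what makes
        // Axis2 unable to infer the transport.
        Object endpoint = requestContext.get(BindingProvider.ENDPOINT_ADDRESS_PROPERTY);
        System.out.println("svc2 endpoint resolved at runtime: " + endpoint);

        // Temporarily force a known-good URL to rule out bad configuration data.
        if (knownGoodUrl != null) {
            requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, knownGoodUrl);
        }
    }
}

If the logged endpoint shows up empty or without its scheme only in the new environment, that points at whatever supplies the URL (possibly the DB the swap test implicates) rather than at svc2 itself.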

Related

HTTP 407 Proxy Authentication Required while accessing Amazon S3

I have tried everything, but I can't seem to fix this issue, which is happening for only one client behind a corporate proxy/firewall. Our Silverlight application connects to Amazon S3 for downloading/uploading documents. On one client, and one client only, it returns a 407 error, and after that the application fails to save anything.
Inner Exception:
System.ServiceModel.ProtocolException: [UnexpectedHttpResponseCode]
Arguments: 407,Proxy Authentication Required
We had something similar at a different client, but that was more of a CORS issue. To resolve it I used CloudFront to fake a sub-domain that then accesses the S3 bucket, and it solved the issue. I was hoping it would fix things with this client as well, but it didn't.
I have tried adding this to web.config, as suggested by a lot of answers:
<system.net>
<defaultProxy useDefaultCredentials="true" >
</defaultProxy>
</system.net>
I have read articles about passing proxy headers with basic authentication using a username and password, but I am not sure how this would help us. The proxy server is used by the client, and any authentication it requires is outside our domain.
**Additional Information**
The Silverlight code references two services. One is our WCF service that retrieves all the data for the application. The other is the Amazon S3 service, which uses the Amazon SOAP API; its endpoint is at http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl?
If I go into our app and only use parts of the system that don't make any calls to the Amazon S3 API, the application works fine. As soon as I go to a part of the system that makes a call to S3, the problem starts. Funnily enough, the call to S3 goes fine and I can retrieve the document, but then any calls to our WCF service return 407.
Any ideas?
**Update 2**
Based on comments from Elliot Nelson, I checked the stack we were using for making HTTP requests in our application. It turns out we are using the client HTTP stack for both HTTP and HTTPS requests by default. Here is the code we have in the App constructor:
public App()
{
    Startup += Application_Startup;
    UnhandledException += Application_UnhandledException;
    InitializeComponent();

    // Use the client HTTP stack (rather than the browser's) for all HTTP and HTTPS requests.
    WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
    WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
}
Now I need to understand the differences between ClientHttp and BrowserHttp and when to use each, as well as the potential impacts/issues of switching to BrowserHttp.
**Update 3**
Is there a way to ask browsers to run your in-browser Silverlight application in trusted mode, and would that help bypass this issue?
(Answer #2)
So, most likely (for corporate environments like this network), almost nothing can be done without whatever custom proxy settings are set in IE, usually pushed by corporate policy. To take advantage of these proxy settings, you want to use WebRequestCreator.BrowserHttp, which automatically uses the browser's default settings when making requests.
There's a table of the differences between these two clients available in the Microsoft docs. I'm guessing you were using something (maybe setting custom headers or reading the raw response body) that wasn't supported in BrowserHttp.
For security reasons, you can't "ask" the browser what its proxy settings are and use them, so this is a tricky situation. You can specify Browser vs Client handling by domain, or even for a specific request (the same page above describes how); you may be able in this case to get away with just using ClientHttp for your service calls and BrowserHttp for your S3 calls, and avoid the problem altogether!
For next steps, I'd try that approach; if it doesn't work, I'd try switching wholesale to BrowserHttp just to see if it bypasses the proxy issue (there's almost no chance the application will actually work, since you're probably using ClientHttp-only options).
Long term, you may want to consider making changes to your services so they are usable by a BrowserHttp-only application (this would require you to be pretty basic in your requests/responses, but using only BrowserHttp would be a guarantee you'd work in pretty much any corp network).
Running in trusted mode is probably a group policy thing which would require their AD admins to approve / whitelist your app.
I think the underlying issue you are facing is that the proxy requires NTLM authentication and for whatever reason the browser declines to provide your app with that context.
One way to prove that it's an NTLM auth issue is to test with curl: get it to make a request through the proxy, and then it should be a bit easier to code against. E.g., the following curl command will get you through 99% of Windows corporate proxies (assuming the proxy is at proxy-host:3128):
C:\> curl.exe -v --proxy proxy-host:3128 --proxy-user : --proxy-ntlm https://www.google.com
NOTE: --proxy-user : tells curl to use the current user session to perform the NTLM challenge.
So if you can get the client to run that, you can at least confirm that NTLM works; then it's just a matter of getting the app to perform the NTLM challenge using the default credentials (which may or may not be provided by the browser session).
Since you described this as a Silverlight application, I'm going to assume you can't use classic browser-proxy troubleshooting like "move the browser to a public network" or "try a different browser" to isolate the problem.
You should try to isolate the proxy server, and have the customer use the required proxy-auth.
The application is making a request, but it might be intercepted by a transparent proxy, or the response might be coming from something other than what you consider the web server.
In the early days, the 401 error was pretty strictly associated with web-auth, and 407 was for proxy-auth.
Architecturally, the separation is a convenience; a single server can combine web server, proxy, and reverse-proxy behaviors.
What happens is that your customer's environment makes a web connection toward the destination, but it receives an HTTP 407 status from some host, probably on their network, or sometimes at the provider. Almost certainly the request is received but not forwarded. The HTTP client your application lives in needs to provide the credentials that host requires. Corporate environments are often complex enough that your customer will say this is the first time they have heard of this (some proxy-auth is also dynamic or destination-specific).
Also, in some corporate environments, the operator will allow temporary or permanent whitelisting from the proxy-auth service. You should see if they can do this, even temporarily, to confirm there aren't going to be other problems.
In the end, it sounds like your application might not robustly support proxy-auth, or the proxy-auth type they use in their environment.

How to find out if an app is currently stopped with CloudFoundry in Swisscom Cloud? Header X-Cf-Routererror reliable?

We would like to add a maintenance page to our front-end which should appear when the back-end is currently unavailable (e.g. stopped or deploying). When the application is not running, the following message is displayed together with a 404 status code:
404 Not Found: Requested route ('name.scapp.io') does not exist.
Additionally, there is a header present when the application is stopped (and only then):
X-Cf-Routererror: unknown_route
Is this header reliably added if the application is not running? If this is the case, I can use this flag to display a maintenance page.
By the way: wouldn't it make more sense to return a 5xx status code if the application is not started or has crashed, i.e. to differentiate between stopped applications and wrong request routes? Catching a 503 error would be much easier, as it does not interfere with our business logic (404 is used inside the application).
Another option is to use a wildcard route.
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-an-http-route-with-wildcard-hostname
An application mapped to a wildcard route acts as a fallback app for route requests if the requested route does not exist.
Thus you can map a wildcard route to a static app that displays a maintenance page. Then if your app mapped to a specific route is down or unavailable the maintenance page will get displayed instead of the 404.
In regard to your question...
By the way: wouldn't it make more sense to return a 5xx status code if the application is not started or has crashed, i.e. to differentiate between stopped applications and wrong request routes? Catching a 503 error would be much easier, as it does not interfere with our business logic (404 is used inside the application).
The GoRouter maintains a list of routes for mapping incoming requests to applications. If your application is down then there is no route in the routing table, that's why you end up with a 404. If you think about it from the perspective of the GoRouter, it makes sense. There's no route, so it returns a 404 Not Found. For a 503 to make sense, the GoRouter would have to know about the app and know it's down or not responding.
I suppose you might be able to achieve that behavior if you used a wildcard route above, but instead of displaying a maintenance page just have it return an HTTP 503.
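For example, here's a minimal sketch of such a fallback app, assuming a plain Java servlet application pushed to Cloud Foundry and mapped to the wildcard route (the class name and page text are placeholders):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Catch-all servlet for the wildcard-route app: any request whose route does
// not match a running app lands here and gets a 503 plus a maintenance page.
@WebServlet("/*")
public class MaintenanceServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE); // 503
        response.setContentType("text/html");
        response.getWriter().write(
                "<h1>Maintenance</h1><p>The service is temporarily unavailable.</p>");
    }
}

Your front-end could then key off the 503 instead of the 404, without touching the 404s used inside your business logic.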
Hope that helps!
The 404 error you see is generated by Cloud Foundry's routing tier and is maintained upstream.
Generally, if you don't want to get such error messages, you can use blue-green deployments. Here is a detailed description in the CF docs: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
Another option is to add a routing service that implements this functionality for you. Have a look at the CF docs for this: https://docs.cloudfoundry.org/services/route-services.html

How to monitor an action by user on Glass

I have a Mirror API-based app in which I have assigned a custom menu item; clicking on it should insert a new card. I'm having a bit of a problem doing that, and I need to know ways I can debug it:
1. Check if the subscription to the Glass timeline was successful.
2. Print out something on the console when the menu item is clicked.
3. Any other way I can detect whether the callback URL was called when the menu item was clicked.
It sounds like you have a problem, but aren't sure how to approach debugging it? A few things to look at and try:
Question 1 re: checking subscriptions
The object returned from subscriptions.insert should indicate that the subscription was successful. Depending on your language, an exception or error would indicate a problem.
You can also call subscriptions.list to make sure the subscriptions are there and are set to the values you expect. If a user removes authorization for your Glassware, this list will be cleared out.
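As a rough sketch, assuming the Mirror API Java client library and an already-authorized Mirror service instance (the helper name here is made up), that check could look something like this:

import java.io.IOException;
import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.Subscription;
import com.google.api.services.mirror.model.SubscriptionsListResponse;

public class SubscriptionCheck {

    // "mirror" must be an authorized Mirror client for the user in question.
    public static void dumpSubscriptions(Mirror mirror) throws IOException {
        SubscriptionsListResponse response = mirror.subscriptions().list().execute();
        if (response.getItems() == null || response.getItems().isEmpty()) {
            System.out.println("No subscriptions registered for this user.");
            return;
        }
        for (Subscription subscription : response.getItems()) {
            // Confirm the collection ("timeline") and that the callback URL is
            // the public HTTPS address you expect.
            System.out.println(subscription.getCollection()
                    + " -> " + subscription.getCallbackUrl());
        }
    }
}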
Some things to remember about the URL used for subscriptions:
It must be an HTTPS URL and cannot use a self-signed certificate
The address must be resolvable from the public internet. "localhost" and local name aliases won't work.
The machine must be accessible from the public internet. Machines with addresses like "192.168.1.10" probably won't be good enough.
Question 2 re: printing when clicked
You need to make sure the subscription is set up correctly and that you have a web app listening at the address you specified that will handle POST operations at that URL. The method called when that URL is hit is up to you, of course, so you can add logging to it. Language specifics may help here.
Try testing it yourself by going to the URL you specify using your own browser. You should see the log message printed out, at a minimum.
If you want it printed for only the specific menu item, you will need to make sure you can decode the JSON body that is sent as part of the POST and respond based on the operation and id of the menu item.
You should also make sure you return HTTP code 200 as quickly as possible - if you don't, Google's servers may retry for a while or eventually give up if they never get a response.
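As a rough illustration only (the servlet name is a placeholder and the JSON parsing is left as a stub), the callback handler might look something like this:

import java.io.IOException;
import java.util.logging.Logger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Placeholder notification servlet: logs every callback POST, keeps the raw
// JSON body for inspection, and returns 200 right away so the Mirror API
// servers don't keep retrying.
public class NotifyServlet extends HttpServlet {
    private static final Logger LOG = Logger.getLogger(NotifyServlet.class.getName());

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = request.getReader().readLine()) != null) {
            body.append(line);
        }
        // Log at WARNING so the message isn't hidden if INFO logging is filtered
        // (see the note about App Engine and INFO below).
        LOG.warning("Mirror notification received: " + body);

        // TODO: parse the JSON and branch on the menu item's id / user action
        // before inserting the new card.

        // Acknowledge quickly; do any slow work asynchronously.
        response.setStatus(HttpServletResponse.SC_OK);
    }
}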
Update: From the sample code you posted, I noticed that you're either logging at INFO or sending to stdout, which should log at INFO (see https://developers.google.com/appengine/docs/java/#Java_Logging). Are you getting the logging from the doGet() method? This StackOverflow question suggests that App Engine doesn't display items logged at INFO unless you change the logging.properties file.
Question 3 re: was it clicked or not?
Depending on the configuration of your web server and app server, there should be logs about what URLs have been hit (as noted by #scarygami in the comments to your question).
You can test it yourself to make sure you can hit the URL and it is logging. Keep in mind, however, the warnings I mentioned above about what makes a valid URL for a Mirror API callback.
Update: From your comment below, it sounds like you are seeing the URL belonging to the TimelineUpdateServlet is being hit, but are not seeing any evidence that the log message in TimelineUpdateServlet.doPost() is being called. What return code is logged? Have you tried calling this URL manually via POST to make sure the URL is going to the servlet you expect?

Receiving error: The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.<nativehr>0x800401e6

After building and deploying a simple web service method that only creates a Document Library list with a few columns, I checked Solution Management in Central Administration and the solution is up. But when I try to retrieve the WSDL, or even just call the web service from its address (it's a void method), I receive this error:
The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.<nativehr>0x800401e6</nativehr><nativestack></nativestack>
The very same method runs fine when called from another web service project that is already deployed, so there's nothing wrong with the code. I'm most probably doing something wrong but can't figure out what.
The system is running on Windows Server 2008 with SharePoint 2010, .NET Framework 3.5, and "Any CPU" build mode.
Thank you!
[edit]
Managed to get rid of the previous error by removing the asmx extension from the blocked file list in Central Administration. Now instead I'm receiving a 404 error:
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /_layouts1/my2claims/tt_claims.asmx
It must run under the same application pool as SharePoint.

How to get/debug request message when calling a Web Service

I have an application that calls an HTTPS web service (it seems to have been created with Java, though I'm not sure). I get this error as a response:
"Error on verifying message against security policy Error code:1000"
Now, I don't exactly understand the error code, and at the moment I can't find anyone responsible who can answer me correctly. I'm not asking about the error itself, of course, since this could be something about certificates, security on the server, etc.
I would, though, like to capture the client request I make and see the whole envelope message, so I can compare it with a couple of samples I have and maybe catch something.
How can I do this? I remember there is a tool you can use for such things when debugging a WCF service call; can it be used in this situation? Can someone remind me of the name of the tool? :)
I created the client using Add Service Reference in VS 2010, and it created some custom bindings. In these bindings it created a tag with an attribute decompressionEnabled="true", but I deleted it because VS was complaining that the attribute is not allowed.
The documentation I have for these services mentions authentication credentials inside the message transport object that is serialized in the request (requestObject), but it refers to another pair of username and password properties that I cannot seem to find. I tried setting them via the client.ClientCredentials.UserName.UserName and Password properties, but I get a read-only error there (strangely, not always).
The specifications also mention connecting with SOAP Security Extensions (WS-Security), which I don't fully understand: do I, the client, have to do something on my side? Aren't these supposed to be extracted into the config file when it is generated?
Any hints and tips are welcome.
Thank you.