Does the FedEx shipping API have a SOAP endpoint? I'm unable to find the WSDL endpoint.
Their site has changed since the other 2 answers here.
The most direct answer is that they do not have a classic WSDL endpoint, where you just get a URL, Add Service Reference and go.
Instead they have you download a zip file which contains a wsdl file which you then use locally - pretty odd. That wsdl file changes name over time, as does the zip, with every version - but the current file is at:
https://images.fedex.com/templates/components/apps/wpor/secure/downloads/wsdl/201607/standard/RateService.zip
My guess is you can get at it without logging in given the images subdomain, likely a CDN.
Once that link dies here's how you currently navigate their obtuse Developer section, which they'll probably also change again:
http://www.fedex.com/us/developer/
Click FedEx Web Services on the left
Under the unclickable "Document and Downloads" part of the page, click Move to Downloads
Scroll to the bottom - there's a weird table with service names like "Quote Rates." Each time you expand a row, the header row will have some Download text stuffed into it. Clicking "WSDL" gets you the zip file.
It's not the worst process I've seen for getting a simple WSDL, but they're definitely in the running.
Yes, it would appear so.
http://www.fedex.com/us/developer/solutions.html
I am not sure whether they host the WSDLs remotely, but they do provide them for download on the Technical Resources page of their developer site, which requires a FedEx login.
You can use this endpoint:
https://wsbeta.fedex.com:443/web-services
I found it here:
https://stackoverflow.com/a/57176378/5374995
Alternatively, you can find the endpoint at the bottom of the WSDL file, inside the service element with name="RateService".
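If you'd rather extract the endpoint programmatically than scroll through the file, here's a sketch that pulls the soap:address location out of a WSDL with Python's standard library. The embedded WSDL fragment is illustrative only; the real RateService.wsdl from FedEx's zip has the same structure near the bottom of the file.

```python
import xml.etree.ElementTree as ET

# Illustrative WSDL fragment standing in for the downloaded RateService.wsdl.
wsdl = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <service name="RateService">
    <port name="RateServicePort" binding="ns:RateServiceSoapBinding">
      <soap:address location="https://wsbeta.fedex.com:443/web-services"/>
    </port>
  </service>
</definitions>"""

ns = {
    "wsdl": "http://schemas.xmlsoap.org/wsdl/",
    "soap": "http://schemas.xmlsoap.org/wsdl/soap/",
}
root = ET.fromstring(wsdl)
service = root.find("wsdl:service[@name='RateService']", ns)
endpoint = service.find("wsdl:port/soap:address", ns).get("location")
print(endpoint)  # https://wsbeta.fedex.com:443/web-services
```

For a real file, replace `ET.fromstring(wsdl)` with `ET.parse("RateService.wsdl").getroot()`.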
I created a new service account and I was trying to generate a key for it. I selected "Create Key", chose "JSON", and then this happens:
The download just doesn't happen (0 KB size and the host is null). I've tried different browsers and different computers, and I don't know what to do; the problem doesn't seem to be on my side...
What can I do about this?
I just had this same problem.
In order to access the Service Account Key file you need to ensure that you can use your API Key (under your Project Credentials), which may be restricted.
My API Key was restricted to my IP address and to the specific API I wanted to use.
After removing the API restriction only then did the download work (see below).
Hope this helps.
[Screenshot: UI showing API restrictions removed]
I also had the same problem and solved it by changing my browser. I was working in the Brave browser and was unable to download the file. After switching to Chrome, the download worked. Sometimes the browser itself is the issue.
In PowerBI, I'd like to get data from a website requiring authentication (http://kdp.amazon.com/). Going to New Source, Web, Advanced, doesn't show me anything that looks promising. Hopefully I'm missing something.
My ideal would be to go to a specific webpage (post authentication), and click on a link that allows me to download an excel spreadsheet.
Thanks for any ideas/pointers.
It depends, and chances are slim for your case.
If it is a direct URL to where the data or file resides (e.g. the data is on the page, a file link, or a web API endpoint), then it depends on what kind of authentication the website uses and whether you can provide the credentials through the Web.Contents options (commonly used for web API authentication).
If it requires further navigation (e.g. click, type in info) to access the data / file after the authentication, then the answer is no.
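For the first case (a direct URL plus credentials), here's a minimal sketch outside Power BI of what "providing the credentials" amounts to, using only Python's standard library. The URL and credentials are hypothetical, and this assumes the site accepts HTTP basic auth; sites that require an interactive login form will not work this way.

```python
import base64
import urllib.request

# Hypothetical report URL and credentials, for illustration only.
url = "https://example.com/reports/sales.xlsx"
username, password = "me@example.com", "secret"

req = urllib.request.Request(url)
token = base64.b64encode(f"{username}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

# urllib.request.urlopen(req) would then download the file if the
# server accepts basic auth on this URL.
print(req.get_header("Authorization"))
```

Power BI's Web.Contents does the same thing for you when you pick Basic credentials in the data source settings.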
That type of data scraping can be accomplished with a headless browser and a scripting/macro engine, for example xvfb (X virtual framebuffer) + Firefox + iMacros. I consider this beyond Power BI's capabilities. If you wish to pursue it further, here are some references:
https://en.wikipedia.org/wiki/Xvfb
https://addons.mozilla.org/en-us/firefox/addon/imacros-for-firefox/
Again, similar but using an alternate toolset:
http://scraping.pro/use-headless-firefox-scraping-linux/
BTW, having done this once or twice before - this is not a great value proposition. If you have to resort to this sort of tactic, it may be time to consider why the developers didn't expose this functionality to you in an API - maybe there is a good reason?
I have been all around the web in the past day and was wondering: does the Amazon API have a .wsdl file? I'm struggling to understand how to talk to their web service.
I've downloaded https://developer.amazonservices.com/gp/mws/api.html/180-1400280-4320051?ie=UTF8&section=feeds&group=bde&version=latest
I have also signed up for MWS. I also found this WSDL file, but it seems to relate to something else, since I can't find any reference to MWS concepts such as OperationType in it: http://webservices.amazon.com/AWSECommerceService/2013-08-01/AWSECommerceService.wsdl
However at no point does it seem to reference a wsdl file.
Am I missing something?
All feed information can be found:
PRODUCTS
https://images-na.ssl-images-amazon.com/images/G/01/rainier/help/XML_Documentation_Intl.pdf
ORDERS
https://images-na.ssl-images-amazon.com/images/G/02/mwsportal/doc/en_US/orders/2011-01-01/MWSOrdersApiReference.V361506650.pdf
Basically you do the following:
1) Create your XML document (if required).
2) Create a web request, making sure the query string contains all the required elements and nothing more or less (this is very important); the order of the elements also matters. Use https://mws.amazonservices.co.uk/scratchpad/index.html to work out which elements are required.
3) Sign the request.
4) Receive a response.
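The signing step is usually the part that trips people up. Here's a sketch of the MWS-style signature calculation (signature version 2: sort the parameters, percent-encode them, and HMAC-SHA256 the canonical string). The credentials and parameter values are dummies for illustration.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_mws_request(secret_key, host, path, params):
    """Build an MWS-style version-2 signature over the query parameters."""
    # Parameter order matters: sort keys and percent-encode per RFC 3986.
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='-_.~')}={urllib.parse.quote(v, safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["POST", host, path, canonical_query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Dummy secret key and parameters, for illustration only.
signature = sign_mws_request(
    "dummy-secret", "mws.amazonservices.co.uk", "/",
    {"AWSAccessKeyId": "AKIAEXAMPLE", "Action": "SubmitFeed",
     "SignatureMethod": "HmacSHA256", "SignatureVersion": "2",
     "Timestamp": "2013-01-01T00:00:00Z", "Version": "2009-01-01"},
)
print(signature)
```

The resulting base64 signature is then appended to the query string as the Signature parameter; the scratchpad linked above shows the exact string-to-sign for any request, which is handy for debugging mismatches.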
USING: Windows7, Python 2.7, Google App Engine
Google's documentation for inserting(creating) a file to Google Drive using Python and the Drive API. Here is the link showing the code near the bottom of the page:
Write a file to a Google Drive using Python
A function named: insert_file is defined in the Python module.
def insert_file(service, title, description, parent_id, mime_type, filename):
The insert_file function takes 6 arguments passed into it. The first arg is service.
In the comment section of the example code, it is indicated that the service arg takes the Drive API service instance as the input.
Args:
service: Drive API service instance.
title: Title of the file to insert, including the extension.
description: Description of the file to insert.
parent_id: Parent folder's ID.
mime_type: MIME type of the file to insert.
filename: Filename of the file to insert.
What is the Drive API service instance? I have no idea what that is or what the valid settings are. Is it the authorization scope that is expressed as a URL? I do know what the title and description are: the title is the new name of the file being written, and the description is a detail, presumably put into the file's metadata. I'm not sure how to get the parent_id or the parent folder either. How is that info obtained? Do I get it manually from Google Drive? I know what the MIME type setting is.
If someone could give an explanation of what the Drive API service instance is, and give an example, that would be great. I did a search for Drive API service instance, and couldn't find an explanation. I searched the internet. I searched Google Developers. I found nothing.
Quickstart provides more boilerplate and a full working walk-through.
import httplib2
from apiclient.discovery import build  # from google-api-python-client

# 'credentials' comes from the OAuth flow earlier in the Quickstart.
http = httplib2.Http()
http = credentials.authorize(http)
service = build('drive', 'v2', http=http)
The service is the API service that you want to instantiate. There are lots of services. An app can communicate with Google Maps, or Google tasks, or email, or Drive.
Google API's for Python
So, the service is the API service. Build instantiates the API service. This is from the video, minute 12:46.
YouTube example for Google Drive API Service
I found something about Parent Folders in the documentation.
Google Drive API
The Google Drive API has a files:insert API. The files:insert API makes a request with various parameters. There is what is called the Request body, which has its own parameters. One of the parameters of the Request body is parents[]. It is optional. For insert, if the parents[] parameter is empty, the file gets created in the user's root directory. So, if you want the file to be written to a particular folder, you need to populate the parents[] parameter with that folder's ID. I'm assuming that is what the parent_id arg in the insert_file function is for, but I'm not sure; the function body isn't given.
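Here's a sketch of what that Request body looks like for the Drive v2 files.insert call. The file names and folder ID are made up for illustration; the key point is that parents[] is a list of references, each with an 'id' key.

```python
def build_file_body(title, description, mime_type, parent_id=None):
    """Build the request body for a Drive v2 files.insert call.
    parent_id is the target folder's ID; when omitted, the file
    lands in the user's root folder."""
    body = {
        "title": title,
        "description": description,
        "mimeType": mime_type,
    }
    if parent_id:
        # v2 expects a list of parent references, each with an 'id' key.
        body["parents"] = [{"id": parent_id}]
    return body

# Hypothetical values for illustration.
body = build_file_body("report.txt", "Monthly report", "text/plain",
                       parent_id="0B52YKjuEE44yUVZfdDNzNnR3SFE")
print(body["parents"])  # [{'id': '0B52YKjuEE44yUVZfdDNzNnR3SFE'}]
```

The body dict is what gets passed (along with the media upload) to service.files().insert(...).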
After doing searches on Parent ID, it looks like that is the folder ID. When you go to your Google Drive and click on a folder, the URL in the browser's address field changes. Just click on the folder and the URL will look something like this:
https://drive.google.com/?tab=wo&authuser=0#folders/0B52YKjuEE44yUVZfdDNzNnR3SFE
The parent ID is the long part at the end, after the last forward slash.
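Assuming the URL has the shape shown above, pulling the ID out is a one-liner:

```python
# The folder ID is whatever follows the last '/' in the #folders fragment.
url = "https://drive.google.com/?tab=wo&authuser=0#folders/0B52YKjuEE44yUVZfdDNzNnR3SFE"
folder_id = url.rsplit("/", 1)[-1]
print(folder_id)  # 0B52YKjuEE44yUVZfdDNzNnR3SFE
```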
I guess I need to look at the Google Quickstart files again.
There are at least three examples that I've found:
Quickstart example. Google Drive SDK
Dr Edit. Google Drive SDK examples
Another Quickstart example Google Drive API
The first one is the simplest. Dr Edit has the most files, maybe? The last one looks like it's more current? I don't know; it's kind of confusing which example to use. The Drive SDK and the Drive API examples only deal with authorizing an outside app to access a user's account.
I am working on a website that generates traffic for partner sites. When a partner site's logo is clicked on our site we open the partner site in a page that contains our basic header and the partner site within an iframe. Earlier we were simply opening the partner site in new window. All cool so far.
Most partner sites use google analytics to track the traffic that we send them and soon after we started opening sites within iframe our partners reported that google analytics does not track data anymore (or tracks just a fraction of data).
I have done my fair share of homework/research on the googleverse and found the known issue with Google Analytics, and cookies in general, across domains and iframes.
I am trying to resolve this issue and the only solution that has been referenced is the use of P3P headers.
First, where do the P3P headers go: my site's pages or the partner sites' pages? Since we have many partner sites (big and small), it won't be practical if the solution is to put tags in each of those sites. I can easily have them added to the page that contains the iframe.
Among the various p3p header generators is there a reliable one that you recommend?
Is there any way around this issue? I really need to open the sites in iframes and obviously the partner sites really need to track the traffic.
Thank you for the help.
Unfortunately, both you and the partner site need to set the headers.
Alternatives:
If you do not want the partner site to set headers, one option is to lower the security level (in IE) or grant access to 3rd party cookies (in FF) in the browser settings. Every client has to do this, so this may not be an attractive solution.
Use localStorage (HTML5 thingy - browsers that support localStorage allow access to both the site and the iFrame's content that is stored in localStorage). This may not be feasible in the short term as it requires both you and your partner site to implement saving/reading information to/from localStorage and not every browser supports it (older IE browsers especially).
To add a basic policy header (ideally you should generate your own policy, which is straightforward; see item #2 below):
in php add this line:
<?php header('P3P: CP="CAO PSA OUR"'); ?>
in ASP.Net:
HttpContext.Current.Response.AddHeader("p3p", "CP=\"CAO PSA OUR\"");
in HTML pages:
<meta http-equiv="P3P" content='CP="CAO PSA OUR"'>
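and, for completeness, the same header from a minimal Python WSGI app (a sketch; any framework's response-header API works the same way):

```python
def app(environ, start_response):
    """Minimal WSGI app that attaches the basic P3P compact policy."""
    headers = [
        ("Content-Type", "text/html"),
        ("P3P", 'CP="CAO PSA OUR"'),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>hello</body></html>"]

# Exercise the app without a real server by faking start_response.
captured = {}
def fake_start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

body = app({}, fake_start_response)
print(dict(captured["headers"])["P3P"])  # CP="CAO PSA OUR"
```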
Regarding your other concerns:
1) P3P headers refer to the HTTP header that delivers something called a compact policy to the browser. Without such a policy in place, IE (most notably) and other browsers will block access to 3rd party cookies (a term used to refer to iFrame's cookies) to protect user's privacy concerns.
As far as Google Analytics goes, both your site and the partner site still need to configure cross-domain tracking as outlined in their documentation.
2) You can use this basic policy header (which is enough to fix iFrame's cookies):
P3P: CP="CAO PSA OUR"
or generate your own. If you're not sure what those terms mean, see this.
To generate such policy, you can use online editors such as p3pedit.com or IBM's tool which present a set of questions and allow you to present answers. This makes it easy for you to quickly generate such policy. You can generate the policy XML, compact policy and more.
3) You can try the two alternatives mentioned above.
Steps to add the policy to your entire site
Generate a compact policy (using one of the tools mentioned earlier) or use the basic policy
In IIS, right-click the desired page, directory, or site, and then click Properties.
On the HTTP Headers tab, click Add.
In the Custom Header Name field, type P3P.
In the Custom Header Value field, enter your Compact P3P Policy (or the basic one from above) and then click OK.
In Apache, a mod_header line like this will do:
Header append P3P "CP=\"CAO PSA OUR\""
Hope this helps.