Embedding Postmates Tracking URL

I was wondering if we could embed the Postmates Tracking URL in our web apps and mobile apps (using webviews).
Right now, embedding a tracking URL (Ex: Sample Tracking URL) in an iframe or other formats is blocked. Is there a different URL pattern for embedding purposes (like YouTube offers)?
If not, how often are the rider's location coordinates updated via the webhooks?

It is not possible to embed the tracking URL. You will need to implement a custom map using the courier lat/lng that are updated via the webhooks.
The courier location updates are sent every 10 seconds.
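For example, here is a minimal sketch of a webhook receiver, assuming a Flask server; the event kind and payload field names (delivery_id, location.lat/lng) are assumptions, so check the Postmates webhook documentation for the actual schema:

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    latest_position = {}  # delivery_id -> {"lat": ..., "lng": ...}

    @app.route("/webhooks/postmates", methods=["POST"])
    def courier_update():
        event = request.get_json(force=True)
        # "event.courier_update" and the field names below are assumptions;
        # verify them against the Postmates webhook docs.
        if event.get("kind") == "event.courier_update":
            location = event.get("location", {})
            latest_position[event.get("delivery_id")] = {
                "lat": location.get("lat"),
                "lng": location.get("lng"),
            }
        return jsonify(ok=True)

    @app.route("/deliveries/<delivery_id>/position")
    def position(delivery_id):
        # The map frontend polls this; courier updates arrive about every 10 s,
        # so polling at that interval is sufficient.
        return jsonify(latest_position.get(delivery_id, {}))

Your custom map (Google Maps, Leaflet, etc.) can then poll the position endpoint and move the courier marker as new coordinates arrive.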

How to get ondemand ids for Vimeo API

I am trying to work with the Vimeo API and I cannot figure out how to access the ondemand data.
The endpoint and parameters in the docs require an ondemand_id to work correctly. I assumed this ID would come from any official ondemand page within Vimeo. But whenever I search the ondemand pages of Vimeo and click on a resource, the URL does not contain any numerical ID.
It only contains the root path for the Vimeo website with /ondemand_page_name at the end. This value cannot be the ID, since it is a string and not a number. I have looked through the entire page several times to try to find the ID, but cannot seem to find it.
For example, when you visit a normal video page on Vimeo, the URL looks something like this:
https://vimeo.com/272976101
where the number 272976101 is the video_id that can be used within the API to get all the data about this particular video. Instead of this format, the ondemand pages have the format:
https://vimeo.com/ondemand/nebula
where there is no numerical ID within the URL. This is the issue I am having. How would I retrieve the public data about this ondemand page through the API?
I feel like there may be a very simple solution/explanation to this issue and any help would be much appreciated.
Also, right now I am not using any SDK to access this data. I am strictly trying to figure out how the API works through the built-in client provided within the documentation.
It's undocumented, but you can use the On Demand custom URL path as the ondemand_id.
So for your On Demand video at https://vimeo.com/ondemand/nebula, you can make an API request to this path: https://api.vimeo.com/ondemand/pages/nebula.
In the response, you'll see the "uri" value "/ondemand/pages/203314", which you can log on your end and use as the ondemand_id instead of /nebula.
Also note, this should be the same URL as your On Demand settings page: https://vimeo.com/ondemand/203314/settings
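To illustrate, here's a minimal sketch in Python with the requests library; the access token is a placeholder, and the endpoint path is taken from the answer above:

    import requests

    ACCESS_TOKEN = "YOUR_VIMEO_ACCESS_TOKEN"  # placeholder

    # Request the On Demand page by its custom URL path ("nebula").
    resp = requests.get(
        "https://api.vimeo.com/ondemand/pages/nebula",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    resp.raise_for_status()
    page = resp.json()

    # The "uri" field holds the numeric ID, e.g. "/ondemand/pages/203314".
    ondemand_id = page["uri"].rsplit("/", 1)[-1]
    print(ondemand_id)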
I hope this information helps!

Googlebot - Not Seeing Mobile Version - Not Mobile Friendly

I have a website with the Response Header Vary set to "User-Agent". I have verified that none of the JavaScript or CSS code is blocked using the Fetch as Google tool. When looking at the Rendering tab for Googlebot type Mobile:Smartphone it is showing that Googlebot is seeing the normal web version and not the mobile version. It also shows on the Rendering tab that the visitor would have seen the page showing the mobile version correctly.
Google is showing my website as not mobile friendly. But there is a very nice mobile version of the website that comes up when I visit with my iPhone or use the Google Chrome simulator. Also, I am not using a separate URL for my mobile version (m.mysite.com).
Do I have to convert my website using Bootstrap in order to get Googlebot to see my mobile version and consider my site mobile friendly?
Here are the response headers:
The website is built using Sitecore. Here is the rule that serves the mobile version.
My guess is that Googlebot mainly checks whether your website uses CSS3 media queries based on screen width, plus the meta tag that sets the viewport on mobile devices, to decide whether a site is mobile friendly. I just tested a layout that isn't even finished and it is marked as mobile friendly. My other sites are all mobile friendly as well, and I have never used the user agent to determine how to render the page; I use media queries for that.
I suggest, if you have not already done so, first adding the viewport meta tag. If that doesn't work, it should work once you switch to media queries in general, using the user agent only when you want to prevent media queries from being applied (for example, not using the mobile media query on a desktop).
What you need...
Viewport meta tag (for example: <meta name="viewport" content="width=device-width, initial-scale=1">)
Media queries
My unfinished layout doesn't even use media queries, so I think adding the meta tag is enough. If you want a URL with the code, write me a PM.

iFrame srcdoc - possible to use dynamic content / regex? HTML5

I know the srcdoc attribute in iframes is relatively new (hence why I've been unable to find much information on it), but I've been wondering (and hoping!) if it's possible to use it dynamically, i.e. to take a section of a website and display it in an iframe.
For example, say I'd want to take just the breaking news section of the BBC website where the content keeps changing and put that in an iframe, just that bit, is the srcdoc able to do that?
This is not possible, and this is also not what srcdoc is designed for. The srcdoc attribute is available in WebKit browsers to set the content of an iframe rather than have it load a separate page from the server. Learn more about it at w3schools.
If you have an iframe with a page from another server (like the BBC website) then you have no way of accessing that page to extract or manipulate its contents because of the browser's same-origin policy.
The way to do it is to have a server-side script download the BBC page, manipulate it, and send the result to the client.
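A minimal sketch of that server-side approach in Python, using Flask, requests, and BeautifulSoup; the URL and the CSS selector for the breaking-news section are placeholders you would need to adapt to the actual page:

    import requests
    from bs4 import BeautifulSoup
    from flask import Flask

    app = Flask(__name__)

    @app.route("/breaking-news")
    def breaking_news():
        # Download the page server-side; the browser's same-origin policy
        # does not apply to server-to-server requests.
        html = requests.get("https://www.bbc.co.uk/news", timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        # Placeholder selector; inspect the live page to find the real one.
        section = soup.select_one("#breaking-news")
        return str(section) if section else ("Section not found", 404)

An iframe on your own site can then point at /breaking-news, which is same-origin from the browser's point of view.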

Is QtWebkit needed to fetch data from websites that need login?

As the title implies,
I need to fetch data from a certain website which needs a login to use.
The login procedure might need cookies or sessions.
Do I need QtWebkit, or can I get away with just QNetworkAccessManager?
I have no experience with either, and will start learning as I go.
So please save me a bit of time comparing the two ^^
Thank you in advance,
Evan
Edit: Having read some related answers,
I'll add some clarifications:
The website in question does not have an API, so I will need to scrape web elements for the data myself.
Can I do that with just QNetworkAccessManager?
No, in most cases you don't need a fully simulated web browser; just performing the same web requests a browser would make is usually enough.
Try to record the web requests in your browser, using a plugin like "Live HTTP Headers" or "Firebug" in Firefox. I think Chrome provides a similar tool out of the box. These tools record the GET and POST requests made by the website when you submit a form on the webpage.
Another option is to inspect the HTML code of the login page. Find the <form> tag and its fields. Put them together in a GET / POST request in your application to simulate the same form.
Remember that some pages use randomized "tokens" in their forms, some set the tokens as cookies. In such cases, you need to request the login page itself in your application first (before sending the filled in form). Both QWebView and QNetworkAccessManager have cookie support.
To sum things up, I think QWebView provides a far more elegant way to simulate user interaction with a web page. The manual way is, however, more "lightweight", as you don't need WebKit and your application might be faster (because only the HTML page is loaded, without any linked resources like images, CSS, or JavaScript files).
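To make the manual approach concrete, here is a minimal sketch using QNetworkAccessManager (shown in Python with PyQt5 for brevity; the same calls exist in C++). The URLs and form field names are placeholders; copy the real ones from the site's <form> tag as described above:

    import sys
    from urllib.parse import urlencode
    from PyQt5.QtCore import QCoreApplication, QUrl
    from PyQt5.QtNetwork import QNetworkAccessManager, QNetworkRequest

    app = QCoreApplication(sys.argv)
    manager = QNetworkAccessManager()  # its cookie jar keeps the session cookie

    # Placeholders: copy the real URL and field names from the site's <form>.
    form_data = urlencode({"username": "evan", "password": "secret"}).encode()
    login_request = QNetworkRequest(QUrl("https://example.com/login"))
    login_request.setHeader(QNetworkRequest.ContentTypeHeader,
                            "application/x-www-form-urlencoded")

    def fetch_data():
        # Logged in; the stored session cookie is reused automatically.
        reply = manager.get(QNetworkRequest(QUrl("https://example.com/data")))
        def done():
            print(bytes(reply.readAll()).decode())
            app.quit()
        reply.finished.connect(done)

    login_reply = manager.post(login_request, form_data)
    login_reply.finished.connect(fetch_data)
    app.exec_()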
QWebView, as the class name states, is a view, so it displays something (in this case, web pages). If you don't need to display the loaded page, then you don't need a view. QNetworkAccessManager can do the work, but you need some knowledge of the HTTP protocol, and also some knowledge about the target site: how it handles logins, what type of request you have to send to log in, etc.

Retrieve information from URL to share it on my website

I am about to develop a new feature on my website that allows the user to give me a URL. I would then use this URL to get the site's title, description, and image(s), so that I can store this information on my website. I need to know if there is any script that can do that, or a web service that would take the URL and give me the information I need, or whether I should develop this from scratch.
Also, I would like to know if there is any kind of standard used in the information-sharing mechanism, as I want to allow the user to share a video or photo from the web.
There is no single script that can extract information from all sites, because the source HTML for most websites is different. You will need to write code specifically for the sites you are scraping.
As for syndicating the content, you can use RSS (Really Simple Syndication), which is an XML format commonly used for sharing content.
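As a starting point for the title/description/image extraction, here is a minimal sketch in Python using requests and BeautifulSoup. It reads the <title> tag plus the common description and Open Graph (og:) meta tags, which many sites provide; anything beyond that is site-specific, as noted above:

    import requests
    from bs4 import BeautifulSoup

    def fetch_page_info(url):
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        def meta(name):
            # Handle both <meta name="..."> and <meta property="..."> variants.
            tag = (soup.find("meta", attrs={"name": name})
                   or soup.find("meta", attrs={"property": name}))
            return tag.get("content") if tag else None

        return {
            "title": soup.title.string if soup.title else None,
            "description": meta("description") or meta("og:description"),
            "image": meta("og:image"),
        }

    print(fetch_page_info("https://example.com"))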