I'm not seeing my isomorphic-fetch-based XHRs show up in MiniProfiler.
My page response headers:
Content-Type:text/html; charset=utf-8
Date:Fri, 14 Jul 2017 11:23:07 GMT
Server:Kestrel
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-MiniProfiler-Ids:["16d0cc1e-9881-403e-a73c-85103e74a52f","803894bc-219e-4011-92c4-9838d8005827","58ee3691-2e1d-4592-b4b1-a1a2f0eb4b61"]
X-Powered-By:ASP.NET
X-SourceFiles:=?UTF-8?B?QzpcY29kZVxvdGhlclxwbGF5LXNzclxmZXRjaGRhdGFcNQ==?=
My fetch response headers:
Content-Type:application/json; charset=utf-8
Date:Fri, 14 Jul 2017 11:23:19 GMT
Server:Kestrel
Transfer-Encoding:chunked
X-MiniProfiler-Ids:["6bcaaaa2-9ad8-42b1-8123-5c12d22a243e","fdfddce8-fc0f-4106-bbab-8de03b22c2e5","dc24b210-8079-41ee-a231-d84d6d1401e3"]
X-Powered-By:ASP.NET
X-SourceFiles:=?UTF-8?B?QzpcY29kZVxvdGhlclxwbGF5LXNzclxhcGlcU2FtcGxlRGF0YVxXZWF0aGVyRm9yZWNhc3Rz?=
Should I be expecting some type of overlap between the two X-MiniProfiler-Ids?
If so, any suggestions for tracking this down further?
The issue here is that MiniProfiler's client-side JS isn't listening to the fetch API in general (though it is for popular JS frameworks). In effect, we just never observe that header coming back, so there's nothing to trigger fetching the profile on.
I think the best route here would be to start a discussion in a MiniProfiler issue so we can decide on the best generic way to support this case. I'm 100% for it; we just need to make sure we don't break anyone in the process.
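In the meantime, here's a rough sketch of the kind of wrapping that would make that header observable. To be clear, this is not MiniProfiler's actual client code, and loadProfilerResults is a hypothetical hook standing in for whatever the profiler UI would do with the ids:

declare function loadProfilerResults(ids: string[]): void; // hypothetical hook into the profiler UI

const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const response = await originalFetch(input, init);

  // The header is a JSON array of profiler ids, e.g. ["16d0cc1e-...", "803894bc-..."]
  const header = response.headers.get("X-MiniProfiler-Ids");
  if (header) {
    try {
      const ids: string[] = JSON.parse(header);
      loadProfilerResults(ids);
    } catch {
      // ignore malformed header values
    }
  }
  return response;
};

Note that if the API lives on a different origin, the server would also need to expose the header via Access-Control-Expose-Headers before the wrapper could read it.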
I followed all of the steps in the Getting Started page of the Instagram Messaging docs (https://developers.facebook.com/docs/messenger-platform/instagram/get-started). I even enabled message control tools and was able to successfully perform GET requests for every step mentioned in the docs, except for getting the conversations from the Graph API.
My request was
curl -i -X GET \
"https://graph.facebook.com/v9.0/xxxxx/conversations?platform=instagram&access_token=EAA..."
And my response was
HTTP/2 500
content-type: application/json; charset=UTF-8
access-control-allow-origin: *
facebook-api-version: v13.0
strict-transport-security: max-age=15552000; preload
pragma: no-cache
cache-control: private, no-cache, no-store, must-revalidate
expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-request-id: AWxxxx
x-fb-trace-id: Gxxxxx
x-fb-rev: 1xxxxx
x-fb-debug: Icxxxxx
content-length: 77
date: Tue, 21 Jun 2022 04:11:42 GMT
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
{"error":{"code":1,"message":"An unknown error occurred","error_subcode":99}}
I'm wondering why, since I followed everything up to here and it was working. Any suggestions on what I could have missed or done wrong? Thanks.
Missing permissions can cause this issue; it is listed as the first item under the error codes section of the docs. Make sure you grant the user access token the appropriate permissions before you create the token. The docs only say "Moderate", but the permissions required are:
From Facebook Login:
instagram_basic
instagram_manage_messages
pages_manage_metadata
Remember, your Facebook Developer account must be able to perform Tasks with at least "Moderate" level access on the Facebook Page connected to the Instagram account you want to query.
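If you want to confirm what the token was actually granted, something like the sketch below reads the granted scopes back from the Graph API debug_token endpoint (this assumes Node 18+ with the global fetch; the two token values are placeholders you'd supply yourself):

const USER_TOKEN = process.env.IG_USER_TOKEN ?? ""; // the user token you pass to /conversations (placeholder)
const APP_TOKEN = process.env.FB_APP_TOKEN ?? "";   // app access token, "{app-id}|{app-secret}" (placeholder)

async function checkGrantedScopes(): Promise<void> {
  const url =
    "https://graph.facebook.com/debug_token" +
    `?input_token=${encodeURIComponent(USER_TOKEN)}` +
    `&access_token=${encodeURIComponent(APP_TOKEN)}`;

  const res = await fetch(url);
  const body = await res.json();

  // data.scopes lists the permissions actually granted to the token; it should include
  // instagram_basic, instagram_manage_messages and pages_manage_metadata.
  console.log(body.data?.scopes);
}

checkGrantedScopes().catch(console.error);

If any of the three permissions above are missing from data.scopes, regenerate the token with them before calling /conversations again.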
Since 4am this morning, two of my WebJobs that have been running quite happily every 2 minutes for months are now broken. The error is:
Http Action - Response from host
'*******************.scm.azurewebsites.net': 'NotFound' Response
Headers: Pragma: no-cache x-ms-request-id:
d719e8d0-429d-4ba3-86de-a732e54dbd4f Cache-Control: no-cache Date:
Wed, 21 Sep 2016 21:20:01 GMT Set-Cookie:
ARRAffinity=8f119d7b3e71f6a6a4d78b9eebbac59d8f13ae47ad9ddc5efdc9151826e5ad57;Path=/;Domain=********************.scm.azurewebsites.net
Server: Microsoft-IIS/8.0 X-AspNet-Version: 4.0.30319 X-Powered-By:
ASP.NET Body: "No route registered for
'/api/triggeredwebjobs/batch/run%3Farguments=job-steve'"
https://github.com/projectkudu/kudu/wiki/WebJobs-API#invoke-a-triggered-job
http://blog.davidebbo.com/2015/05/scheduled-webjob.html
I am using David Ebbo's solution from the links above, and I am also adding parameters as outlined on the project website.
As we discovered, the root issue was that the '?' in the URL was encoded as %3F, instead of just being ?. Fixing the URL in the scheduler addressed the issue.
What's not clear is what caused it to change if it used to work; that could be some kind of scheduler or portal issue. But at least we know it's not something related to the WebJob itself.
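For anyone else hitting this, the call the scheduler should end up making looks roughly like the sketch below, with a literal '?' rather than %3F (this assumes Node 18+ fetch; the site name, job name and credentials are placeholders for your own values):

const SITE = "yoursite";                  // -> yoursite.scm.azurewebsites.net (placeholder)
const JOB = "batch";                      // triggered WebJob name (placeholder)
const USER = process.env.KUDU_USER ?? ""; // deployment credentials (placeholder)
const PASS = process.env.KUDU_PASS ?? "";

async function runWebJob(): Promise<void> {
  // Note the literal '?' before 'arguments', not an encoded %3F.
  const url = `https://${SITE}.scm.azurewebsites.net/api/triggeredwebjobs/${JOB}/run?arguments=job-steve`;
  const auth = Buffer.from(`${USER}:${PASS}`).toString("base64");

  const res = await fetch(url, {
    method: "POST", // triggered jobs are invoked with POST
    headers: { Authorization: `Basic ${auth}` },
  });
  console.log(res.status, res.statusText); // 202 Accepted means the job was queued
}

runWebJob().catch(console.error);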
It looks like you have a mismatch between the name of the WebJob that your scheduler is trying to invoke (batch) and the actual name of your WebJob in your Web App (PyramisBatch), so the error is expected.
Can you change your scheduler to hit the right WebJob?
I had the same issue with my web api app. Restarting the app fixed it.
We are working on a product which uses the Azure storage service for storing data.
We are using the Azure REST API from C++ to communicate with Azure, and cURL to execute the REST requests.
Right now we are working on the functionality to list blobs, but it is failing with this error:
<?xml version="1.0" encoding="utf-8"?>
<Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate
the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:16cd7e3d-0001-0032-2dd6-6f2e4f000000
Time:2016-02-25T14:14:23.2377982Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'CyPhzsBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=' is not the same as any computed signature. Server used following string to sign: 'GET
x-ms-date:Thu, 25 Feb 2016 14:16:20 GMT
x-ms-version:2015-02-21
/sevenstars/container2
comp:list
delimiter:/
maxresults:2
restype:container'
</AuthenticationErrorDetail></Error>
======================
Following is the Wireshark output that we observed:
GET /container2?comp=list&delimiter=/&maxresults=2&restype=container HTTP/1.1
Host: sevenstars.blob.core.windows.net
Accept: */*
x-ms-date:Thu, 25 Feb 2016 14:16:20 GMT
x-ms-version:2015-02-21
Authorization:SharedKey sevenstars:CyPhzsBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=
HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Content-Length: 704
Content-Type: application/xml
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: 16cd7e3d-0001-0032-2dd6-6f2e4f000000
Date: Thu, 25 Feb 2016 14:14:22 GMT
...<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:16cd7e3d-0001-0032-2dd6-6f2e4f000000
Time:2016-02-25T14:14:23.2377982Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'CyPhzsBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=' is not the same as any computed signature. Server used following string to sign: 'GET
======================
As per the suggestions on the Microsoft forum, I ensured all parameters are set correctly (':' is used instead of '=' in the string to sign).
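For reference, my understanding of the final signing step is below. This is only a sketch using Node's crypto rather than our actual C++/cURL code, and the account key is a placeholder; the point is that the HMAC-SHA256 key must be the Base64-decoded account key and the result must be Base64-encoded again:

import { createHmac } from "node:crypto";

const accountKey = process.env.AZURE_ACCOUNT_KEY ?? ""; // base64 storage account key (placeholder)

// Paste the exact string-to-sign that the service echoes back in AuthenticationErrorDetail
// (including any blank lines for empty standard headers), then compare signatures.
const stringToSign = process.argv[2] ?? "";

// signature = Base64( HMAC-SHA256( key = Base64Decode(accountKey), data = UTF8(stringToSign) ) )
const signature = createHmac("sha256", Buffer.from(accountKey, "base64"))
  .update(stringToSign, "utf8")
  .digest("base64");

console.log(`Authorization: SharedKey sevenstars:${signature}`);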
Can you please let us know how we can resolve this issue?
Your help is much appreciated.
Thanks and regards
Rahul Naik
I'm having a problem allowing users to download items from the Sitecore media library.
I have a link to a media item (xls, pdf, etc.) on a page; when a user clicks the link, the file should be downloaded.
This works fine on our test Sitecore instance, but when we try it on our live instance, the file starts to download OK but then seems to be truncated (both instances are located on the same IIS server).
Using Fiddler, I can see that the download's response body is truncated at 784 KB.
HTTP/1.1 200 OK
Cache-Control: public, max-age=604800
Content-Type: application/vnd.ms-excel
Expires: Fri, 25 Mar 2011 14:12:48 GMT
Last-Modified: Fri, 18 Mar 2011 11:17:45 GMT
ETag: 050b2f8a408b47c49fefbe28b5ec9661
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET; Sitecore CMS
X-Powered-By: ASP.NET
Content-Disposition: attachment; filename="Filename.xls"
Date: Fri, 18 Mar 2011 14:12:48 GMT
Content-Length: 804795
(the file is actually 5,019,136 bytes!)
If anybody can shed any light, I'd be eternally grateful!
Yours in desperation!
Pete
--UPDATE--
I think I might be getting closer to the cause of this.
On closer examination of the response I'm getting back from the server, I see:
Response denied by WatchGuard HTTP proxy.
Reason: chunk-size line too large line='‰u¯^%|\x0c\x04‡V–\x15ÿ\x00¾c*?Jã5]cW×o[R×5K›Ë‡ûóÝÎÒ;}Y‰&«Q]4èQ¥ðE/D‘ÅW\x13ˆ®ïRn^¿Ì(¢ŠÔÄ(¢Š\x00(¢Š\x00(¢Š\x00(¢¾…ý•?àœ¿\x1bi\x19'
Mystery 1 solved: the reason it was working on the test site was that I wasn't going via the proxy server!
Mystery 2: why is the chunk size too large?
Pete
The first thing I would check is the <httpRuntime maxRequestLength="" /> element/attribute in your web.config file. You'll want to ensure that the maxRequestLength attribute (the value is in kilobytes) is set high enough to accommodate the size of the files that you're serving.
Beyond that, are you generating the response headers yourself (i.e. in your code)? For instance, are you explicitly setting the Content-Disposition and Content-Length headers? If so, I would suggest verifying that the method you're using to compute the Content-Length is accurate.
Lastly, verify that the IIS configuration is the same between both Sitecore instances. Are you using IIS6, IIS7 or IIS7.5?
cheers,
adam
Have you done a web.config comparison between the two instances?
Have you asked about this on the SDN forums?
This problem sounds so familiar. I'm sure that I've seen it before... Are you sure it's truncating the return? Or is it that there is 16kb of unwanted junk in the header/data? I want to say that's the issue I've seen before... but can't remember for sure. Brain cells are tingling, give me some time.
I've done plenty of ASP.NET and PHP development, but I'm less familiar with how to track this sort of thing down in CF. My naive first angle of attack was to search for any reference to Google in any of the source code. No luck.
I'm running the site on IIS7. Google, Bing and Yahoo all apparently "see" nothing on my site.
Update: I ran Fetch as Googlebot and got the following:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
Server: Microsoft-IIS/7.0
Set-Cookie: CFID=1638251;expires=Sat, 14-Apr-2040 15:51:41 GMT;path=/
Set-Cookie: CFTOKEN=35688222;expires=Sat, 14-Apr-2040 15:51:41 GMT;path=/
Set-Cookie: LANGUAGEID=1;expires=Sat, 14-Apr-2040 15:51:41 GMT;path=/
Set-Cookie: CFGLOBALS=urltoken%3DCFID%23%3D1638251%26CFTOKEN%23%3D35688222%23lastvisit%3D%7Bts%20%272010%2D04%2D22%2008%3A51%3A41%27%7D%23timecreated%3D%7Bts%20%272010%2D04%2D22%2008%3A51%3A41%27%7D%23hitcount%3D2%23cftoken%3D35688222%23cfid%3D1638251%23;expires=Sat, 14-Apr-2040 15:51:41 GMT;path=/
X-Powered-By: ASP.NET
Date: Thu, 22 Apr 2010 15:51:40 GMT
Use the Google Webmaster Tools "Fetch as Googlebot" feature (it's in Labs) to see exactly what your server is returning to Google.
It turned out to be a convoluted application.cfm page: the site simply didn't work without cookies. Oh, the joys of maintaining an old, rusty website! It's not the type of website (in terms of content and overall purpose) I would have expected to completely fail if cookies were disabled.
Being a newbie to CF, I mistakenly assumed that my simple "example.cfm" would only execute the code on that page; I wasn't aware of application.cfm. I checked for includes and saw nothing. That's when I hunted through the trace using IIS7's Failed Request Tracing capability. By comparing the Googlebot request with a normal browser request, I became certain that nothing strange was happening at that level; there wasn't any rogue module being loaded that was messing with my request.
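For anyone debugging something similar, a quick way to reproduce roughly what a cookieless crawler sees is to request the page without cookies, as in the sketch below (Node 18+ fetch; the URL is a placeholder):

async function fetchLikeACrawler(url: string): Promise<void> {
  // fetch sends no cookies here, so a page whose application.cfm depends on them
  // will come back empty or broken, roughly what Googlebot saw.
  const res = await fetch(url, {
    headers: { "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1)" },
  });
  const body = await res.text();
  console.log(res.status, body.length, body.slice(0, 200));
}

fetchLikeACrawler("https://example.com/example.cfm").catch(console.error);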