CloudKit Dashboard: Deploy Schema to Production fails with "There was a problem loading the environment's status"

Note: This is not new, but I have some new insights on it.
For about three weeks now I have regularly tried to deploy the development schema of my CloudKit container to production, using the CloudKit Dashboard:
It spins for exactly a minute and then tells me "There was a problem loading the environment's status".
This is not new; several other questions describe the same problem:
Error CloudKit Dashboard - There was a problem loading the environment's status
Does iCloud need to be in the Production environment in order to use in Production?
iCloud dashboard: Cannot deploy CloudKit schema to Production
Apple support told me to
look at https://developer.apple.com/forums/thread/656723 (try again after a day with stable network)
use Safari and reset browser settings to clear cache and cookies
"You may also try creating a new CloudKit container, rebuilding your schema, and then try again." => obviously not an option, because users already have data in production

TL;DR:
Kill the timeout by running this in the console:
var id = window.setTimeout(function() {}, 0);
while (id--) {
    window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(the snippet evaluates to undefined in the console; that's expected)
How I got there
So I started to look at the requests the site makes to the backend when I click "deploy". Chrome shows that the request to
https://p39-ckdatabasews.icloud.apple.com/r/v3/user/<container-name>/production/public/admin/deployment/status?team_id=<team-id>
is cancelled after 1.0 min.
Insight 1
The problem is with the production schema. I had used Reset Development Environment before to make sure I hadn't messed up the development schema myself; knowing that the problem is on the production side would have spared me that.
I used the Copy as cURL command (in Chrome, because it also copies the auth cookies, which Safari does not) and ran it in Terminal.
Interestingly, that does get a response after about 1 minute 37 seconds, which matches the X-Apple-Edge-Response-Time: 97244 header (milliseconds).
If you know what to look for, the console will also tell you that the request timed out:
Insight 2
The server takes too long to respond (more than 1 minute) and the client script times out (at exactly 1 minute).
Note: You can also get a response by right-clicking the request in Chrome and choosing "Replay XHR".
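A rough console equivalent of that replay (a sketch only; it assumes you are logged in to the dashboard in the same tab so the session cookies are sent, that the dashboard's CORS setup allows the call, and that the endpoint returns JSON) is to fire the status request yourself with fetch, which has no built-in one-minute timeout:
fetch('https://p39-ckdatabasews.icloud.apple.com/r/v3/user/<container-name>/production/public/admin/deployment/status?team_id=<team-id>', {
    credentials: 'include' // send the dashboard's auth cookies along with the request
})
    .then(function (response) { return response.json(); })
    .then(function (status) { console.log(status); })   // the backend may take well over a minute to answer
    .catch(function (error) { console.error(error); });
Replace <container-name> and <team-id> with your own values, exactly as in the URL above.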
Solution
I tried to understand the JavaScript that sends the XHR request and modify the timeout, but I failed. However, you can apparently clear all timeouts that exist with
var id = window.setTimeout(function() {}, 0);
while (id--) {
    window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(from https://stackoverflow.com/a/8860203)
Running that while waiting for the response actually worked for me!
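If you would rather not clear every pending timeout on the page, another option (untested against the dashboard's minified code, so treat it as a sketch) is to patch window.setTimeout before clicking Deploy, stretching any delay scheduled around the 60-second mark:
// Run in the console before clicking "Deploy".
// Wraps window.setTimeout and stretches delays near 60 s to 10 min,
// which comfortably covers the ~97 s the backend needed above.
var originalSetTimeout = window.setTimeout;
window.setTimeout = function (callback, delay) {
    var extraArgs = Array.prototype.slice.call(arguments, 2);
    var patchedDelay = (delay >= 55000 && delay <= 65000) ? 600000 : delay;
    return originalSetTimeout.apply(window, [callback, patchedDelay].concat(extraArgs));
};
The clear-all-timeouts snippet above is cruder but proven; this variant only touches timeouts that look like the request timeout, so the rest of the page keeps behaving normally.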

Related

Amazon FEED _GET_XML_RETURNS_DATA_BY_RETURN_DATE_

I'm trying to get a returns report from Amazon, but my request is always cancelled. I have a working report request using
'ReportType' => 'GET_MERCHANT_LISTINGS_DATA',
'ReportOptions' => 'ShowSalesChannel=true'
I modified it by changing ReportType and removing ReportOptions. MWS accepts the request, but it is always cancelled. I also tried to find a working example of it on Google, but without success. Maybe someone has a working example of it? I can download the report when I send the request from the Amazon webpage. I suppose it requires ReportOptions, but I don't know what to put there (the only info I get is ReportProcessingStatus CANCELLED). Normally I choose Day, Week, or Month. I checked the Amazon docs, but there isn't much information there: https://docs.developer.amazonservices.com/en_US/reports/Reports_RequestReport.html
Any ideas?

npm:youtube-dl and Lambda HTTP Error 429: Too Many Requests

I am running an npm package, youtube-dl, through a Lambda function, as I want to create an online converter.
I have suddenly started to run into the following error message:
{
"errorMessage": "Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json --format=best[ext=mp4] https://www.youtube.com/watch?v=MfTbHITdhEI\nERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n",
"errorType": "Error",
"stackTrace": ["ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.", "", "ChildProcess.exithandler (child_process.js:275:12)", "emitTwo (events.js:126:13)", "ChildProcess.emit (events.js:214:7)", "maybeClose (internal/child_process.js:925:16)", "Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)"]
}
Edit: I have run this a few times when I was testing the other day, but today I only ran it once.
I think that the IP address used by my Lambda function has now been blacklisted. I'm unsure how to proceed as I am a junior and very new to all this.
Is there a way to resolve this? Can I get a new IP address? Is this going to be super costly?
youtube-dl lacks a delay (requests-per-time limit) option
(see the suggestion at the bottom of my post).
NEVER download more than one video with youtube-dl.
You can look up the youtube-dl authors' contact details (e-mail etc.) and write to them directly, or open an issue about it on the GitHub page; the more requests they get, the sooner they may be pleased to fix it.
Currently they have plenty of requests about this issue on GitHub, but they tend to lock discussions and close tickets on this problem.
I believe this is some sort of misbehaviour.
I also found that the developer suggests using a proxy instead of introducing a delay option in the code, which is rather ironic.
OK, regarding the proxy: that does not actually solve the problem, since it is a program-design shortcoming, and whether you use a proxy or not, YouTube's limits are still there.
Please note:
This causes not only the error above but also YouTube blocking your IP.
Once you hit this situation, YouTube will flag your IP as suspicious again and again, even with a small number of requests; this causes tremendous problems, since the IP stays marked as suspicious.
Without an option to limit requests per time (with a safe default value), I consider youtube-dl dangerous software that is bound to cause problems, and I have stopped using it until such an option is introduced.
RECOMMENDATIONS:
Use Ctrl+S (suspend) and Ctrl+Q (resume) while youtube-dl is collecting metadata for many videos (when you have already downloaded many videos of a channel but new ones are still there). I suspend it for a few minutes after each 10.
And use --limit-rate 150K (or as low as is sane); this may help you avoid hitting the limit, since the whole transfer is rate-shaped (see the sketch below for passing this flag from Node.js).
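In the Lambda setup from the question, these flags can be passed straight to the bundled binary. This is a sketch, not the npm wrapper's own API: it calls the binary directly via child_process, and the binary path is the one shown in the error message above.
// Sketch: invoke the bundled youtube-dl binary directly with a throttled download rate.
// The binary path comes from the Lambda error message; the video URL is the one from the question.
const { execFile } = require('child_process');

const args = [
    '--limit-rate', '150K',        // shape the transfer so YouTube's limits are less likely to trigger
    '--format', 'best[ext=mp4]',
    '-o', '/tmp/%(id)s.%(ext)s',   // Lambda only allows writes under /tmp
    'https://www.youtube.com/watch?v=MfTbHITdhEI'
];

execFile('/var/task/node_modules/youtube-dl/bin/youtube-dl', args, (error, stdout, stderr) => {
    if (error) {
        console.error('youtube-dl failed:', stderr || error);
        return;
    }
    console.log('youtube-dl finished:', stdout);
});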
Ok, so I found this response: https://stackoverflow.com/a/45339683/9793169
I am wondering: is it possible that, because our volume is low, we just always end up using the same container and hence the same IP address?
Yes, that is exactly the reason. A container is only spawned if no containers are already available. After a few minutes of no further demand, excess/unneeded containers are destroyed.
If so is there any way to prevent this?
No, this behavior is by design.
SOLUTION:
I logged out for 20 minutes, went back to the function, and ran it again. It worked.
Not my solution; it took me a while to understand what he meant (reading is an art). It worked for me.
(see: https://askubuntu.com/questions/1220266/youtube-dl-do-not-working-http-error-429-too-many-requests-how-can-i-solve-this)
You have to use the option --cookies in combination with a current/correct cookie file.
Here are the steps I followed:
1. if you use Firefox, install the cookies.txt add-on and enable it
2. clear your browser cache, clear your browser cookies (privacy reasons)
3. go to google.com, and log in with your google account
4. go to youtube.com
5. click on the cookies.txt addon, and export the cookies, save it as cookies.txt (in the same directory from where you are going to run youtube-dl)
6. this worked for me ... youtube-dl --cookies cookies.txt https://www.youtube.com/watch?v=....
Hope it helps.
Use the --force-ipv4 option in the command:
youtube-dl --force-ipv4 ...
What you should do is handle that error by retrying the requests that are throttled.
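A minimal sketch of that idea in Node.js, assuming a runYoutubeDl helper of your own that rejects with an error whose message contains the youtube-dl output shown above:
// Sketch: retry a throttled youtube-dl call with exponential backoff.
// runYoutubeDl is a hypothetical helper wrapping your existing invocation.
async function withRetry(runYoutubeDl, maxAttempts = 4) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await runYoutubeDl();
        } catch (error) {
            const throttled = /429|Too Many Requests/.test(String(error.message));
            if (!throttled || attempt === maxAttempts) {
                throw error; // not a rate-limit error, or out of retries
            }
            const delayMs = 1000 * Math.pow(2, attempt); // 2 s, 4 s, 8 s, ...
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
}
Keep in mind that if the IP is already blocked, as described above, a few seconds of backoff will not help; this only smooths over transient throttling, and long waits inside a Lambda also add to its billed duration.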

Facebook API calls rate limit reached

Three days ago we received an alert from the Facebook developers page informing us that one of our apps had reached 100% of the hourly rate limit. Our application had a bug that caused the increase in API calls, which we fixed yesterday afternoon. Since we deployed the fix, the API-calls graph ("Application Level Rate Limiting") shows that we no longer reach the limit, but calls to the Facebook APIs still fail. We want to know whether there is a recovery period before access to the APIs is restored after dropping back under the limit.
Here you can see a screenshot of the alert:
In the response headers of one of the calls, we receive this error:
Status code: 403
Header name: WWW-Authenticate
Header value: OAuth "Facebook Platform" "invalid_request" "(#4) Application request limit reached
You can see the header here
You are not the only one right now:
https://developers.facebook.com/support/bugs/169774397034403/
But I suppose it should be gone after a day or a few hours. In my experience, sometimes I can make a few calls and then it shuts me off again, even though our application is not that API-call intensive.
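If you want your own code to recognise this condition instead of failing opaquely, here is a hedged sketch: the Graph API reports it as error code 4 in the JSON error body, matching the "(#4)" in the header above, and graphUrl is a placeholder for your own call.
// Sketch: detect Facebook's application-level rate limit (error code 4) and stop hammering the API.
// Uses the global fetch (browser or Node 18+).
async function callGraphApi(graphUrl) {
    const response = await fetch(graphUrl);
    const body = await response.json();
    if (body.error && body.error.code === 4) {
        // "(#4) Application request limit reached": back off until the hourly window resets,
        // since continuing to call while throttled can prolong the block.
        throw new Error('Facebook app rate limit hit: ' + body.error.message);
    }
    return body;
}
The X-App-Usage response header, where present, also reports how close the app is to its limit and is worth logging.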
This is the response from Facebook:
Dear all,
We checked with our rate limiting team who confirmed that several
improvements were made to help you troubleshoot rate limit related
error messages. For example, we've fixed an existing graph and added a
new one in the app dashboard to give you more info about the status of
your app.
If you continue to receive error code #4 in your request, we'd
appreciate it if you can create a new bug report because this thread
is getting rather long. We'll be happy to analyze each individual case
for you if you can provide the following info:
your app id
the entire error message (include the trace id)
a screenshot of the graphs on your app dashboard
Thanks for your patience while we looked into this.
Xiao

FusionReactor ENT v5 WebRequest Runtime Protection Emails not working

I have FusionReactor ENT v5 on my new server.
I have FusionReactor STD Edition v5 on my old server.
The only problem I am having is that the WebRequest Runtime Protection is not working.
I have checked the settings,
http://docs.intergral.com/display/FR50/Protection+Settings
Request Runtime Protection Strategy
This defines what happens when this protection type is triggered. The individual survival strategies are defined as follows:
Abort (with Email Notification): Protection will attempt to abort any requests that have run for too long and have triggered Request Runtime Protection. Optionally sends an email notification containing details about the triggering request.
Email Notification Only: Send an email notification (as long as notification is enabled in FusionReactor Settings) but take no further action.
My reactor.conf from my old server:
fac.archive.retention.value=100
crashprotection.pagelist.0.track_stats=true
user.0=Administrator,administrator,XXXXXXXXXXXXXXXXXXXXXXXX,?p\=running&static\=&flavor\=WebRequest&__toc\=requests
crashprotection.pagelist.0.string=/directory1/directory2/SiteFile1.cfm
crashprotection.pagelist.1.string=directory1/directory2/SiteFile2.cfm
crashprotection.pagelist.count=2
crashprotection.email.address.to=TEST@domain.com
crashprotection.pagelist.1.scope=ALL
version=7
crashprotection.pagelist.0.scope=TIMEOUT
fruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
crashprotection.pagelist.1.track_stats=false
crashprotection.pagelist.1.regex=false
crashprotection.pagelist.0.regex=false
fac.archive.retention.strategy=SIZE
crashprotection.email.active=true
crashprotection.pagelist.0.append_parameters=false
crashprotection.requests.level.min=5
crashprotection.pagelist.1.prepend_hostname=false
crashprotection.pagelist.0.prepend_hostname=false
crashprotection.pagelist.1.append_parameters=false
fac.scheduler.mailjob.enable=true
crashprotection.email.server=127.0.0.1
crashprotection.request_timeout=60
crashprotection.email.address.from=fusionreactor@domain.com
gruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
My reactor.conf from my new server:
user.0=Administrator,administrator,XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
metrics.slow_threshold=2
crashprotection.email.active=true
crashprotection.email.server=127.0.0.1
crashprotection.request_timeout=10
email.hostname=local.domain.com
crashprotection.email.address.from=fusionreactor@domain.com
version=6
crashprotection.requests.level.min=5
metric.recent_slow_pages.statusthreshold.ok2w=1
crashprotection.email.address.to=testuser@domain.com
gruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
fruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The test email works fine and the crash notification email works fine.
The slow web request notification does not.
By the looks of things, your new server config looks OK for the Crash Protection settings. It might just be that the Protection system has not picked up the settings correctly.
Have you tried restarting your server?
It looks like there is a bug in the current FR 5 agent where the Crash Protection settings would not correctly update if Quantity protection is enabled. A server restart should correct this issue.
If you do not wish to restart your server, you can try putting all the protection settings back to the defaults and saving them, then setting up Runtime Protection first.
Hopefully this will solve your issue.
If you have any other problems, I suggest you contact the FusionReactor support team at support@fusion-reactor.com.
Kind Regards,
Ben Donnelly
FusionReactor Support

Scheduled Tasks not running - Coldfusion Server Administration

I have a series of scheduled tasks that all run at various times of the day. Since the migration from ColdFusion version 7 to 10, these tasks have stopped running.
When I check the box that outputs the results to a file, I get a text file that says nothing more than "Connection Failure". I have tried everything imaginable regarding the username and password for the task; it makes no difference. When I run the CFM page in my browser, the page works correctly and generates an email just like it should. I just can't make it run as a scheduled event.
Does the scheduled task folder have any check for the session or anything? I mean, is the scheduled task folder accessible without logging in? Please try removing all the redirect rules for the application. That might work.
For me the requests were timing out. I increased the timeout in the ColdFusion Administrator and that solved it. Doing a cfhttp call in a test file and dumping the results helped me troubleshoot it.