AWS Lambda Edit Code Inline shows "Loading your function..." continuously

Edit Code Inline shows "Loading your function..." continuously and never actually loads the function.
And since the function does not load, you obviously cannot edit it.
I think there was some kind of update to Lambda today (Nov 30, 2017), because when you click on a function to edit it, there is a new section at the top that shows CONFIGURATION and says ADD TRIGGER, which was not there yesterday.
And when you scroll down to the Function Code section, it just says "Loading your function..." and never does anything else.
Anybody else seeing this odd behavior?
EDIT: Using Firefox 57 on Windows 10. I get the following warnings/errors when I check the Firefox console:
WARNINGS:
window.controllers/Controllers is deprecated. Do not use it for UA detection. ace.js:1:18479
Use of getPreventDefault() is deprecated. Use defaultPrevented instead. globalnav-fe3b9e5995ba8d342d395cb57493ce54ac2b40bb.gz.js:2:39229
window.controllers/Controllers is deprecated. Do not use it for UA detection. environment-default.js:5987
The ‘content’ attribute of Window objects is deprecated. Please use ‘window.top’ instead. home
ERRORS:
Unhandled promise rejection
DOMException { }
polyfill.js:4326:11
onUnhandled/https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:4326:11
[90]https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:1786:27
onUnhandled/<
https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:4320:16
[46]https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:993:25
https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:2154:7
run
https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:2140:5
listener
https://d3ifj4k507k5fs.cloudfront.net/ide-164cb54be56918ce7c55af08ee13c6339e8ebc5c/polyfill.js:2144:3
(The same unhandled promise rejection and stack trace then appear a second time.)

I was facing the same issue on Chrome 63.
What worked for me:
Right-click on the "Loading your function..." message.
Click on Reload Frame.
It should load the IDE.

What browser do you use? Chrome 62 and Firefox 57 fail at inline editing Lambdas and both throw DOMExceptions, but Safari 11 seems to work. Try Safari (or some other browser) for editing while waiting for Amazon to fix this.

This is happening in Chrome because Chrome is blocking cookies from the CloudFront domain.
Go to the address bar, where you will see an icon saying that some cookies have been blocked on this website.
Click on it.
Click on Manage.
Click on Blocked.
Click Allow for the cloudfront domain.
Reload page.
I have tried it and it's working for me.
Edit: This could also be tried on other browsers that are facing this issue.

This happened to me using the 64-bit version of Google Chrome 75.0.3770.142 on Windows 10 circa July 31st, 2019. To resolve it, I trashed all browser data (cookies, etc.). Steps to remove that data can be found here.
Things that did not work for me...
Refresh frame.
Refresh tab.
Close and re-open browser.
Restart computer.
Cry quietly to myself and question my decisions in life.
The Lambda editor in the console is AWS Cloud9, which needs some stuff (e.g., cookies) that is spelled out here.

None of these worked for me.
Try clearing all AWS cookies. That did work.

I found that it was Firefox's content blocking causing the problem: click the shield next to the "HTTPS Secure" lock at the left-hand end of the address bar and click "Turn off blocking for this site".
Request to access cookie or storage on “https://….cloudfront.net/ide-…/modules/#c9/ide/plugins/c9.ide.language.core/worker.js” was blocked because we are blocking all third-party storage access requests and content blocking is enabled.home
[Screenshot: blocking on (editor not working)]
[Screenshot: blocking off (editor working)]
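For anyone who wants to confirm that blocked storage is the culprit, here is a minimal console probe (my own sketch, not part of the original answer): when Firefox blocks third-party storage, simply touching localStorage throws, which lines up with the empty DOMException in the question.
// Run in the browser console with the editor frame's context selected.
// If third-party storage is blocked, setItem throws a DOMException.
try {
  window.localStorage.setItem('c9-probe', '1'); // throws if storage is blocked
  window.localStorage.removeItem('c9-probe');
  console.log('storage access OK');
} catch (e) {
  console.log('storage blocked:', e.name); // typically SecurityError
}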

Try using a different browser. Worked for me.


CloudKit Dashboard: Deploy Schema to Production fails with "There was a problem loading the environment’s status"

Note: This is not new, but I have some new insights on it.
For about three weeks now I have regularly tried to deploy the development schema of my CloudKit container to production, using the CloudKit Dashboard:
It spins for exactly one minute and then tells me "There was a problem loading the environment's status".
This is not new, many other questions face this as well:
Error CloudKit Dashboard - There was a problem loading the environment's status
Does iCloud need to be in the Production environment in order to use in Production?
iCloud dashboard: Cannot deploy CloudKit schema to Production
Apple support told me to
look at https://developer.apple.com/forums/thread/656723 (try again after a day with stable network)
use Safari and reset browser settings to clear cache and cookies
"You may also try creating a new CloudKit container, rebuilding your schema, and then try again." => obviously doesn't work, because users have data on production
TL;DR:
Kill the timeout by running this in the console:
var id = window.setTimeout(function() {}, 0);
while (id--) {
  window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(each call returns undefined; that's okay. Timer IDs are handed out as increasing integers, so counting down from a freshly issued ID clears every pending timeout.)
How I got there
So I started to look at the requests the site makes to the backend when I click "deploy". Chrome shows that the request to
https://p39-ckdatabasews.icloud.apple.com/r/v3/user/<container-name>/production/public/admin/deployment/status?team_id=<team-id>
is cancelled after 1.0 min.
Insight 1
The problem is with the production schema. I had used Reset Development Environment before to make sure I hadn't messed up the development side myself; knowing this would have spared me that step.
I used the Copy as cURL command (in Chrome, because it also copies the auth cookies, which Safari does not) and ran it in Terminal.
Interestingly, that does respond, after 1 min 37 s. That matches the X-Apple-Edge-Response-Time: 97244 header (97,244 ms ≈ 97 s).
If you know what to look for, the console will also tell you that the request timed out:
Insight 2
The server takes too long to respond (> 1 min) and the client script times out (at 1 min).
Note: You can also get a response by right-clicking the request in Chrome and choosing "Replay XHR".
Solution
I tried to understand the JavaScript that sends the XHR request and modify the timeout, but I failed. However, you can apparently clear all timeouts that exist with
var id = window.setTimeout(function() {}, 0);
while (id--) {
  window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(from https://stackoverflow.com/a/8860203)
Running that while waiting for the response actually worked for me!
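If clearing every pending timeout feels too blunt, a narrower variant is to patch window.setTimeout before clicking Deploy. This is an untested sketch of my own; the 60000 ms value is an assumption based on the observed one-minute cancellation:
// Stretch only ~1-minute timers to 10 minutes; all other timers are untouched.
var origSetTimeout = window.setTimeout.bind(window);
window.setTimeout = (fn, delay, ...args) =>
  origSetTimeout(fn, delay === 60000 ? 600000 : delay, ...args);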

npm:youtube-dl and Lambda HTTP Error 429: Too Many Requests

I am running an npm package, youtube-dl, through a Lambda function, as I want to create an online converter.
I have suddenly started to run into the following error message:
{
  "errorMessage": "Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json --format=best[ext=mp4] https://www.youtube.com/watch?v=MfTbHITdhEI\nERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n",
  "errorType": "Error",
  "stackTrace": ["ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.", "", "ChildProcess.exithandler (child_process.js:275:12)", "emitTwo (events.js:126:13)", "ChildProcess.emit (events.js:214:7)", "maybeClose (internal/child_process.js:925:16)", "Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)"]
}
Edit: I have run this a few times when I was testing the other day, but today I only ran it once.
I think that the IP address used by my Lambda function has now been blacklisted. I'm unsure how to proceed as I am a junior and very new to all this.
Is there a way to resolve this? Can I get a new IP address? Is this going to be super costly?
youtube-dl lacks a delay option (a limit on requests per unit of time);
(see the recommendations at the bottom of my post).
NEVER download more than one video with youtube-dl.
You can look up the youtube-dl authors' contact details (e-mail, etc.) and write to them directly, or open an issue about it on the GitHub page; the more requests they get, the sooner they may be pleased to fix it.
Currently they have plenty of identical requests about this issue in their tracker, but they tend to lock discussions and close tickets on this problem.
This is some sort of misbehaviour, I believe.
I also found that the developer suggests using a proxy instead of introducing a delay option in the code, which is extremely funny.
OK, regarding the proxy: that does not actually solve the problem, since the root cause is a gap in the program's design; whether or not you use a proxy, YouTube's limits are still there.
Please note:
This causes not only the error in the subject but also YouTube blocking your IP.
Once you hit this situation, YouTube will block your IP as suspicious again and again, even with a small number of requests; this causes tremendous problems, since the IP stays marked as suspicious.
Without an option to limit requests per unit of time (with a safe default value), I consider youtube-dl dangerous software that is bound to cause problems, and I have stopped using it until such an option is introduced.
RECOMMENDATIONS:
Use Ctrl+S (suspend) and Ctrl+Q (resume) while youtube-dl is collecting the digest for many videos (when you have already downloaded many videos of a channel but new ones remain). I suspend it for a few minutes after each 10.
And use --limit-rate 150K (or as low as is sane); this may help you avoid hitting the limit, since the whole transmission is shaped. A Node.js sketch for the asker's setup follows.
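In the asker's Node.js/Lambda setup, those flags can be passed straight to the bundled binary via child_process. A minimal sketch, assuming the binary path copied from the error message above; the URL is a placeholder:
const { execFile } = require('child_process');

const url = 'https://www.youtube.com/watch?v=...'; // placeholder
execFile(
  '/var/task/node_modules/youtube-dl/bin/youtube-dl',
  // --limit-rate shapes the transfer so bursts are less likely to trip throttling
  ['--limit-rate', '150K', '--dump-json', '--format=best[ext=mp4]', url],
  (err, stdout) => {
    if (err) return console.error(err);
    console.log(JSON.parse(stdout)); // the video's metadata as JSON
  }
);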
Ok, so I found this response: https://stackoverflow.com/a/45339683/9793169
I am wondering: is it possible that, because our volume is low, we always end up using the same container, and hence the same IP address?
Yes, that is exactly the reason. A container is only spawned if no containers are already available. After a few minutes of no further demand, excess/unneeded containers are destroyed.
If so is there any way to prevent this?
No, this behavior is by design.
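You can watch this reuse happen with a trivial handler (a sketch of my own, not from the linked answer): module-scope state is initialized once per container, so back-to-back runs that land on the same container, and hence the same outbound IP, log the same id.
// Node.js Lambda handler: containerId is set once when the container starts.
const containerId = Math.random().toString(36).slice(2);

exports.handler = async () => {
  console.log('container id:', containerId); // identical across warm starts
  return { containerId };
};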
SOLUTION:
I logged out for 20 minutes, went back to the function, and ran it again. It worked.
Not my solution; it took me a while to understand what he meant (reading is an art). It worked for me.
(see: https://askubuntu.com/questions/1220266/youtube-dl-do-not-working-http-error-429-too-many-requests-how-can-i-solve-this)
You have to use the option --cookies in combination with a current/correct cookie file.
Here are the steps I followed:
1. If you use Firefox, install the cookies.txt add-on and enable it.
2. Clear your browser cache and your browser cookies (privacy reasons).
3. Go to google.com and log in with your Google account.
4. Go to youtube.com.
5. Click on the cookies.txt add-on and export the cookies; save the file as cookies.txt (in the same directory from where you are going to run youtube-dl).
6. This worked for me: youtube-dl --cookies cookies.txt https://www.youtube.com/watch?v=....
Hope it helps.
Use the --force-ipv4 option in the command:
youtube-dl --force-ipv4 ...
What you should do is handle that error by retrying the requests that are throttled.
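For example, a small retry wrapper with exponential backoff. This is a generic sketch: fetchVideoInfo is a hypothetical stand-in for whatever youtube-dl call you make, and matching "429" in the error message is an assumption about how the wrapper surfaces the failure.
// Retries a promise-returning call, backing off 1 s, 2 s, 4 s, ... when the
// failure looks like an HTTP 429 throttle; anything else is rethrown.
async function withRetry(fn, attempts = 5, baseDelayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      const throttled = /429/.test(String(err && err.message));
      if (!throttled || i === attempts - 1) throw err;
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
}

// Usage (fetchVideoInfo is hypothetical):
// withRetry(() => fetchVideoInfo(url)).then(info => console.log(info));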

Google Cloud Dataprep: Transformation engine unavailable due to prior crash (exit code: -1)

I am trying to create a flow using Google Cloud Dataprep. The flow takes a data set from BigQuery, which contains app event data from Firebase Analytics, and flattens event parameters for easier analysis. I keep getting the following error before even being able to create the first step (recipe):
Transformation engine unavailable due to prior crash (exit code: -1)
See the top right corner in the screenshot below.
[Screenshot]
The error message you received is particularly challenging in that it is so generic. The root cause could be within the platform, or it could be in whatever execution environment you used for the job. Unfortunately, we don't have the resources right now to capture and document all of the error messages that can be emitted during the job execution process, which can span a wide variety of servers and other software platforms.
I encountered the same problem. First I tried following steps:
Refresh the browser (i.e., click the Reload button top left)
"Hard refresh" the browser (i.e., ctrl + Reload)
Clear cache + cookies (i.e., https://support.google.com/accounts/answer/9098093?co=GENIE.Platform=Desktop&hl=en&visit_id=636802035537591679-2642248633&rd=1)
References:
https://community.trifacta.com/s/question/0D51L00005dG3MXSA0/i-was-working-on-a-recipe-and-i-received-the-error-message-transformation-engine-unavailable-due-x-to-prior-crash-exit-code-1-why-am-i-getting-this-error
https://community.trifacta.com/s/question/0D51L00005choIbSAI/unable-to-develop-on-our-trifacta-42-platform-for-the-past-12-hours-steps-added-to-recipes-are-lost-and-having-to-recode-the-error-given-is-transformation-engine-unavailable-what-is-causing-this-error
However this did not solve the problem. Then I tried:
Confirm that your Chrome version is 68+. If not, please upgrade.
Navigate to chrome://nacl/ and ensure that PNaCl is enabled.
Navigate to chrome://components/ and ensure that the PNaCl Version is not 0.0.0.0. Click on Check for Updates
Did not solve the problem either.
References:
https://community.trifacta.com/s/question/0D51L00005dDrcmSAC/not-able-to-preview-data-sources-or-edit-recipes
I got the info from Trifacta that there had been an internal issue after maintenance. So if none of the above solutions works, you just have to wait and see when they fix the problem.

Tensorboard on Windows: 404 _traceDataUrl error

On Windows when I execute:
c:\python35\scripts\tensorboard --logdir=C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs
and browse to http://localhost:6006, the first time I am redirected to http://localhost:6006/[[_traceDataUrl]] and get the following messages at the command prompt:
W0913 14:32:25.401402 Reloader tf_logging.py:86] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
W0913 14:32:25.417002 Reloader tf_logging.py:86] Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
W0913 14:32:36.446222 Thread-2 application.py:241] path /[[_traceDataUrl]] not found, sending 404
When I try http://localhost:6006 again, TensorBoard takes a long time, presents the 404 message again, but this time displays a blank web page.
Logs directory:
checkpoint
events.out.tfevents.1504911606.LTIIP82
events.out.tfevents.1504912739.LTIIP82
model.ckpt-194000.data-00000-of-00001
model.ckpt-194000.index
model.ckpt-194000.meta
Why am I getting redirected and 404ed?
I'm having the exact same error. Maybe it is because of this issue. So try changing the --logdir argument to --logdir=foo:C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs.
Hope it helps.
Could it be that you are trying to access the page with IE? Apparently IE is not supported by TensorBoard yet (https://github.com/tensorflow/tensorflow/issues/9372). Maybe use another browser.
I encountered the same error before and found that it was due to an Internet settings problem.
In Internet Explorer, go to Tools -> Internet Options -> Connections, click LAN settings, and then check Automatically detect settings.

sitecore session time-out or server failure on publish or browse for package to install

I am at my wits' end on this and can't figure it out. In Sitecore v6.2 something has changed that causes the following error message:
"The operation could not be completed. Your session may have been lost due to a time-out or server failure".
It looks like this is coming from Sitecore.Web.UI.Sheer.ClientPage?
The request info:
https://sitecore.test.domain.com/sitecore/shell/sitecore/content/Applications/Content%20Editor.aspx?ic=People%2f16x16%2fcubes_blue.png
the response:
{"commands":[{"command":"Alert","value":"The operation could not be completed.\n\nYour session may have been lost\ndue to a time-out or a server failure.\n\nTry again."}]}
At first, I assumed it was because I had plugged in some new HttpModules, so I moved them into the Sitecore pipeline model; the problem persisted. I then removed them from the entire application, and the problem still persisted.
A Google search on the error turns up some information on the keepalive.aspx stuff, but addressing that made no difference.
I decompiled the code with Reflector, but can't find where this particular error is raised. It must be in Sitecore.Nexus or something.
According to my superiors we will open a ticket once we get the build resolved, but here's to hoping someone here has some suggestions.
The constant for this error message is THE_OPERATION_COULD_NOT_BE_COMPLETED_YOUR_SESSION_MAY_HAVE_BEEN_LOSTDUE_TO_A_TIMEOUT_OR_A_SERVER_FAILURE_PLEASE_TRY_AGAIN
This might happen if your server restarts while a dialog is open.