ColdFusion Builder Check for Updates not working

Running CF Builder 3, Help > Check for Updates throws an error: "Unable to read repository at http://ahtik.com/eclipse-update/content.xml.
peer not authenticated". Not sure why it's looking there, but I pasted the URL into a browser and indeed got a 404 error. Is there a way to change where CFB looks for this update check?

Okay, I was getting the "No repository found at file:/Y:/.." message because that path was listed under Window > Preferences > Install/Update > Available Software Sites. Y was a network drive that we had disabled, but beyond that, it looks like my CFB was incorrectly configured for updates anyway. I spent some time trying to add the standalone patch at the top of https://helpx.adobe.com/coldfusion/kb/coldfusion-builder-3-updates.html, but kept getting error messages. I inherited this copy of CFB a year ago, so I just did a clean reinstall. After that I ran Help > Check for Updates; it found and installed an update, I restarted CFB, and now everything's good.

Related

npm:youtube-dl and Lambda HTTP Error 429: Too Many Requests

I am running an npm package, youtube-dl, through a Lambda function because I want to create an online converter.
I have suddenly started to run into the following error message:
{
"errorMessage": "Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json --format=best[ext=mp4] https://www.youtube.com/watch?v=MfTbHITdhEI\nERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n",
"errorType": "Error",
"stackTrace": ["ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.", "", "ChildProcess.exithandler (child_process.js:275:12)", "emitTwo (events.js:126:13)", "ChildProcess.emit (events.js:214:7)", "maybeClose (internal/child_process.js:925:16)", "Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)"]
}
Edit: I have run this a few times when I was testing the other day, but today I only ran it once.
I think that the IP address used by my Lambda function has now been blacklisted. I'm unsure how to proceed as I am a junior and very new to all this.
Is there a way to resolve this? Can I get a new IP address? Is this going to be super costly?
The root cause is youtube-dl's lack of a delay (requests-per-time limit) option
(see the suggestion at the bottom of my post).
Never download more than one video at a time with youtube-dl.
You can look up the youtube-dl author's contact details (e-mail etc.) and write to them directly, and also open an issue about it on the GitHub page; the more requests they get, the sooner they may be inclined to fix it.
Currently they have plenty of identical requests about this issue on GitHub, but they tend to block discussions and close tickets on this problem.
This is some sort of misbehaviour, I believe.
I also found that the developer suggests using a proxy instead of introducing a delay option in the code, which I find extremely funny.
OK, fine, use a proxy; but that does not actually solve the problem, since it is a shortcoming of the program's design, and whether you use a proxy or not, YouTube's limits are still there.
Please note:
This causes not only the error above but also YouTube blocking your IP.
Once you hit this situation, YouTube will block your IP as suspicious again and again, even with a small number of requests; this causes tremendous problems, since the IP stays marked as suspicious.
Without a requests-per-time limit option (with a safe value by default), I consider youtube-dl dangerous software that will cause problems, and I have stopped using it until this option is introduced.
RECOMMENDATIONS:
Use Ctrl+S (suspend) and Ctrl+Q (resume) while youtube-dl is collecting the digest for many videos (e.g. when you have already downloaded most of a channel's videos but new ones are still there). I suspend it for a few minutes after every 10 videos.
Also use --limit-rate 150K (or as low as is sane); this may help you avoid hitting the limit, since the whole transfer is shaped.
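Since the original question runs youtube-dl from Node on Lambda, here is a minimal sketch of what that pacing advice could look like in code: download strictly one video at a time, shape the transfer with --limit-rate, and pause between downloads. The binary path and format flag are copied from the error message above; the pause length and function names are illustrative assumptions, not part of the answer.

const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileAsync = promisify(execFile);

// Path to the bundled youtube-dl binary, taken from the error output above.
const YTDL = '/var/task/node_modules/youtube-dl/bin/youtube-dl';

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Download URLs one at a time, throttled, with a pause between each download.
async function downloadPaced(urls, pauseMs = 60 * 1000) {
  for (const url of urls) {
    await execFileAsync(YTDL, ['--limit-rate', '150K', '--format', 'best[ext=mp4]', url]);
    await sleep(pauseMs); // arbitrary pause; tune to whatever keeps you under the limit
  }
}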
OK, so I found this response: https://stackoverflow.com/a/45339683/9793169
I am wondering whether, because our volume is low, we just always end up using the same container, and hence the same IP address?
Yes, that is exactly the reason. A container is only spawned if no containers are already available. After a few minutes of no further demand, excess/unneeded containers are destroyed.
If so is there any way to prevent this?
No, this behavior is by design.
SOLUTION:
I logged out for 20 minutes, went back to the function, and ran it again. It worked.
Not my solution; it took me a while to understand what he meant (reading is an art). It worked for me.
(see: https://askubuntu.com/questions/1220266/youtube-dl-do-not-working-http-error-429-too-many-requests-how-can-i-solve-this)
You have to use the --cookies option in combination with a current/correct cookie file.
Here are the steps I followed:
1. if you use Firefox, install the cookies.txt add-on and enable it
2. clear your browser cache and your browser cookies (privacy reasons)
3. go to google.com and log in with your Google account
4. go to youtube.com
5. click on the cookies.txt add-on and export the cookies, saving them as cookies.txt (in the same directory from where you are going to run youtube-dl)
6. this worked for me ... youtube-dl --cookies cookies.txt https://www.youtube.com/watch?v=....
Hope it helps.
Use the --force-ipv4 option in the command:
youtube-dl --force-ipv4 ...
What you should do is handle that error by retrying the requests that are throttled.
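To make that suggestion concrete, here is a rough sketch of retrying the throttled youtube-dl call with exponential backoff, again assuming the Node/Lambda setup from the question. The attempt count, delays, handler shape, and event.url field are illustrative assumptions; note also that a 429 caused by an IP block can persist far longer than any in-process backoff.

const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileAsync = promisify(execFile);

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry a call that may be throttled, backing off exponentially between attempts.
async function withRetry(fn, attempts = 4, baseDelayMs = 2000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      const throttled = /429|Too Many Requests/.test(String(err.stderr || err.message || err));
      if (!throttled || i === attempts - 1) throw err;
      await sleep(baseDelayMs * 2 ** i); // 2s, 4s, 8s, ...
    }
  }
}

// Hypothetical Lambda handler: fetch video metadata, retrying when YouTube throttles us.
exports.handler = async (event) => {
  const { stdout } = await withRetry(() =>
    execFileAsync('/var/task/node_modules/youtube-dl/bin/youtube-dl',
      ['--dump-json', '--format', 'best[ext=mp4]', event.url])
  );
  return JSON.parse(stdout);
};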

Google Cloud Dataprep: Transformation engine unavailable due to prior crash (exit code: -1)

I am trying to create a flow using Google Cloud Dataprep. The flow takes a dataset from BigQuery containing app event data from Firebase Analytics, and flattens the event parameters for easier analysis. I keep getting the following error before even being able to create the first step (recipe):
Transformation engine unavailable due to prior crash (exit code: -1)
See top right corner in the screenshot below
Screenshot
The error message you received is particularly challenging in that it is so generic. The root cause could be within the platform, or it could be in whatever execution environment you used for the job. Unfortunately, we don't have the resources right now to capture and document all of the error messages that can be emitted during the job execution process, which can span a wide variety of servers and other software platforms.
I encountered the same problem. First I tried the following steps:
Refresh the browser (i.e., click the Reload button top left)
"Hard refresh" the browser (i.e., ctrl + Reload)
Clear cache + cookies (i.e., https://support.google.com/accounts/answer/9098093?co=GENIE.Platform=Desktop&hl=en&visit_id=636802035537591679-2642248633&rd=1)
References:
https://community.trifacta.com/s/question/0D51L00005dG3MXSA0/i-was-working-on-a-recipe-and-i-received-the-error-message-transformation-engine-unavailable-due-x-to-prior-crash-exit-code-1-why-am-i-getting-this-error
https://community.trifacta.com/s/question/0D51L00005choIbSAI/unable-to-develop-on-our-trifacta-42-platform-for-the-past-12-hours-steps-added-to-recipes-are-lost-and-having-to-recode-the-error-given-is-transformation-engine-unavailable-what-is-causing-this-error
However this did not solve the problem. Then I tried:
Confirm that your Chrome version is 68+. If not, please upgrade.
Navigate to chrome://nacl/ and ensure that PNaCl is enabled.
Navigate to chrome://components/ and ensure that the PNaCl Version is not 0.0.0.0. Click on Check for Updates
That did not solve the problem either.
References:
https://community.trifacta.com/s/question/0D51L00005dDrcmSAC/not-able-to-preview-data-sources-or-edit-recipes
I got information from Trifacta that there had been an internal issue after maintenance. So if none of the above solutions work, you just have to wait and see when they fix the problem.

Tensorboard on Windows: 404 _traceDataUrl error

On Windows when I execute:
c:\python35\scripts\tensorboard --logdir=C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs
and I web browse to http://localhost:6006 the first time I am redirected to http://localhost:6006/[[_traceDataUrl]] and I get the command prompt messages:
W0913 14:32:25.401402 Reloader tf_logging.py:86] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
W0913 14:32:25.417002 Reloader tf_logging.py:86] Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
W0913 14:32:36.446222 Thread-2 application.py:241] path /[[_traceDataUrl]] not found, sending 404
When I try http://localhost:6006 again, TensorBoard takes a long time, presents the 404 message again, but this time displays a blank web page.
Logs directory:
checkpoint
events.out.tfevents.1504911606.LTIIP82
events.out.tfevents.1504912739.LTIIP82
model.ckpt-194000.data-00000-of-00001
model.ckpt-194000.index
model.ckpt-194000.meta
Why am I getting redirected and 404ed?
I'm having the exact same error. Maybe it is because of this issue. So try changing the --logdir argument to --logdir=foo:C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs (i.e. give the log directory a run name).
Hope it helps.
Could it be that you are trying to access the webpage with IE? Apparently IE is not supported by TensorBoard yet (https://github.com/tensorflow/tensorflow/issues/9372). Maybe use another browser.
I encountered the same error before, and found that it was due to an internet settings problem.
In Internet Explorer, go to Tools -> Internet Options -> Connections, click LAN settings, and then check Automatically detect settings.

NATS Error while developing echo service

I'm trying to develop a system service, so I'm using the echo service as a test.
I developed the service by following the directions in the CF docs.
Now the echo node is running, but the echo gateway fails with the error "echo_gateway - pid=15040 tid=9321 fid=290e ERROR -- Exiting due to NATS error: Could not connect to server on nats://localhost:4222/".
I got into this issue and was stuck for almost a week before someone helped me resolve it. The underlying problem is something else; since errors are not trapped properly, it gives a misleading message. You need to go to GitHub and get the latest code base. The fix for this issue is http://reviews.cloudfoundry.org/#/c/8891 . Once you fix this issue, you will most likely encounter a timeout field issue; the solution for that is to define the timeout field in gateway.yml.
A few additional properties became required in the echo_gateway.yml.erb file - specifically, the latest were default_plan and timeout, under the service group. The properties have been added to the appropriate file in the vcap-services-sample-release repo.
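To illustrate, the added properties might look roughly like the sketch below in echo_gateway.yml.erb. The placement and values here are guesses based on the description above; the vcap-services-sample-release repo has the authoritative layout.

service:
  # ...existing service properties...
  default_plan: "free"   # assumed value; use whichever plan your service defines
  timeout: 15            # assumed value, in seconds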
Looks like the fix for the misleading error has been merged on GitHub. I haven't updated and verified this myself just yet, but the Gerrit comments indicate the solution is the same as what the node base has had for some time. I did previously run into that error handling and it was far more helpful.

Sitecore session time-out or server failure on publish or browse for package to install

I am at my wits' end on this and can't figure it out. In Sitecore v6.2, something has changed that is causing an error message as follows:
"The operation could not be completed. Your session may have been lost due to a time-out or server failure".
It looks like this is coming from Sitecore.Web.UI.Sheer.ClientPage?
The request info:
https://sitecore.test.domain.com/sitecore/shell/sitecore/content/Applications/Content%20Editor.aspx?ic=People%2f16x16%2fcubes_blue.png
the response:
{"commands":[{"command":"Alert","value":"The operation could not be completed.\n\nYour session may have been lost\ndue to a time-out or a server failure.\n\nTry again."}]}
At first, I assumed it was because I had plugged in some new HttpModules, so I moved them into the Sitecore pipeline model, and the problem persisted. I removed them from the entire application and the problem still persisted.
A Google search on the error gets me to some information on the keepalive.aspx stuff, but addressing that has no bearing.
I decompiled the code with Reflector, but I can't find anywhere this particular error is raised. It must be in Sitecore.Nexus or something.
According to my superiors we will open a ticket once we get the build resolved, but here's to hoping someone here has some suggestions.
The constant for this error message is THE_OPERATION_COULD_NOT_BE_COMPLETED_YOUR_SESSION_MAY_HAVE_BEEN_LOSTDUE_TO_A_TIMEOUT_OR_A_SERVER_FAILURE_PLEASE_TRY_AGAIN
This might happen if your server restarts while some dialog is open.