Our Slack integration for error reporting suddenly stopped working. Has anyone experienced similar issues?
We also removed the Slack channel integration and set it up again. Sadly, no improvement. Also worth noting: other channels work as expected.
The issue has been resolved in the meantime. Thanks to Kyle for looking into this!
I have a site running on AWS on the free-tier spec. It has been running for more than 10 months without any trouble, but lately I've found it returns a 502 Bad Gateway and cannot be reached.
My question is: I haven't touched the settings or anything, yet I suddenly got this. I'm wondering what caused it or what happened. Is this a common issue? How can I avoid it?
Thanks in advance.
I'm new to server technology and don't really understand how servers work (I hope you can shed some light on this as well).
But basically my problem is this: I have a Firebase database which I need to update every 20 seconds, all day long. This is the way I think I should solve the problem: I send an HTTP POST request to the Firebase database every 20 seconds, which means I need a server running a piece of code that sends the HTTP request every 20 seconds. I'm not sure if this is the right way to do it, and even if it is, I don't know how to implement it.
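Roughly, this is the kind of loop I imagine running on the server (just an untested sketch; the database URL and payload are made up, and I'm assuming the node-fetch package):

    // Untested sketch: POST a value to the Firebase Realtime Database REST API
    // every 20 seconds. The URL, path, and payload below are placeholders.
    import fetch from "node-fetch"; // assumes the node-fetch package is installed

    const DB_URL = "https://my-project.firebaseio.com/readings.json";

    async function pushReading(): Promise<void> {
      const res = await fetch(DB_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ value: Math.random(), at: Date.now() }),
      });
      if (!res.ok) {
        console.error("Firebase rejected the write:", res.status);
      }
    }

    // Fire once immediately, then every 20 seconds.
    pushReading();
    setInterval(pushReading, 20_000);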
Some questions I have are:
Do I definitely need to create a server for this? And if so, what platform is recommended for writing my server code? (Preferably free platforms.)
I have tried reading up on the available platforms such as AWS and Google Cloud, but I don't really get the terminology used. Are there any tutorials available for this?
I am really lost and have been stuck on this for some time; any help is deeply appreciated.
This is achievable by leveraging CloudWatch Events, specifically using a rate expression that invokes an SNS topic, which can then hit your HTTP endpoint.
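Something like this rough sketch (using the AWS SDK for JavaScript v3; the rule name, region, and topic ARN are placeholders, and note that rate expressions bottom out at one minute, so an exact 20-second cadence would need extra handling on top of this):

    // Sketch: create a scheduled CloudWatch Events rule that publishes to an
    // existing SNS topic. The topic is assumed to already have an HTTPS
    // subscription pointing at your endpoint.
    import {
      CloudWatchEventsClient,
      PutRuleCommand,
      PutTargetsCommand,
    } from "@aws-sdk/client-cloudwatch-events";

    const client = new CloudWatchEventsClient({ region: "us-east-1" });

    async function scheduleUpdates(): Promise<void> {
      // Fire the rule once a minute (the smallest interval a rate expression allows).
      await client.send(
        new PutRuleCommand({
          Name: "firebase-update-schedule", // placeholder rule name
          ScheduleExpression: "rate(1 minute)",
          State: "ENABLED",
        })
      );

      // Point the rule at the SNS topic; SNS then POSTs to the subscribed endpoint.
      await client.send(
        new PutTargetsCommand({
          Rule: "firebase-update-schedule",
          Targets: [
            {
              Id: "sns-target",
              Arn: "arn:aws:sns:us-east-1:123456789012:firebase-update", // placeholder ARN
            },
          ],
        })
      );
    }

    scheduleUpdates().catch(console.error);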
Hope that helps!
I would suggest that you try to keep everything within Firebase. Create a Firebase Cloud Function that sends the HTTP request for the update, and use Firebase functions-cron, a cron-like scheduler, to schedule it.
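Something along these lines (a rough sketch, not a drop-in solution; the Pub/Sub topic name and database path are placeholders, and I do the update through the Admin SDK rather than an HTTP request since the function already runs inside the project). Keep in mind cron schedules are typically minute-granular, so an exact 20-second cadence would need extra handling:

    // Sketch: a Cloud Function triggered by a Pub/Sub topic that the
    // functions-cron scheduler publishes to, performing the update via the Admin SDK.
    import * as functions from "firebase-functions";
    import * as admin from "firebase-admin";

    admin.initializeApp();

    export const periodicUpdate = functions.pubsub
      .topic("minutely-tick") // placeholder: the topic your cron job publishes to
      .onPublish(async () => {
        // Write whatever your periodic update needs to write.
        await admin.database().ref("/status/lastUpdate").set({
          updatedAt: admin.database.ServerValue.TIMESTAMP,
        });
      });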
Working on the commercial paper example as outlined here:
https://github.com/IBM-Blockchain/cp-web
It works fine until I try to log into my project here:
http://cp-web-jklondon-1411.mybluemix.net/login
I get an error saying:
Waiting on the node server to open up so we can talk to the blockchain. This app is likely still starting up. Check the server logs if this message does not go away in 1 minute.
This application cannot run without the blockchain network :(
What's going on?
Thanks
Rav
Bluemix might have been unstable, or the app may still be staging. This actually happens quite frequently.
I'm building an app called Tuftslife. When running locally it works fine, but when I put it on AWS it works for a little while, outputs a lot of errors (like those below), then crashes a little later and only returns 500s. Our theory is that these requests never time out and are overwhelming the server.
We tried turning off socket.io using this gist, but it doesn't seem to have worked. What are we doing wrong?
Deleting the entire file that handles socket.io on the frontend ended up muting our issue. I don't know enough about these things to diagnose exactly what we were doing wrong. Oh well, it works now.
An exception occurred when setting up mail server parameters. This exception was caused by:
coldfusion.mail.MailSpooler$SpoolLockTimeoutException: A timeout occurred while waiting for the lock on the mail spool directory..
Recently I started getting this nasty exception in my mail.log file. Once this exception shows up, every mail sent from that ColdFusion instance throws the same exception.
The only thing that seems to work is restarting the ColdFusion server. After (usually) a day or two, the same exception pops up again and we're back in the same situation.
I am aware of the hotfix to control the mail spool timeout, but all it does is increase the timeout from 30 to 60 seconds. Since the mails are sent successfully until I get the exception, I don't think this is my solution.
I also read the thread on the Adobe forum where people have installed the hotfix but still get the error.
I also tried a script to restart only the mail service when this exception shows up, but that didn't work for me, as it didn't for others with this problem. It would not be a real solution anyway.
The mails I send are simple HTML mails.
The number of mails sent, spread over a day, is no more than 30.
I've sent mails from the exact same ColdFusion server many times before, but with <cfmail>. This is the first time I'm sending them in cfscript. I don't know if this has anything to do with it, but it's only since I started using the cfscript equivalent of <cfmail> that I began getting this exception.
All the related blog posts I could find are unanswered and also pretty old. I thought someone might have a solution by now.
Thanks.
(Using ColdFusion 9.0.1 Server on Windows Server 2008.)
We were also experiencing this mail spool lock issue. After the issue occurred a fourth time in 2 months, we started reviewing these forums and found no solution.
This made me think that perhaps the problem (and the solution) is not really CF at all, so I went into the server's virus protection and excluded the CF mail spool directory so that the scanner does not touch it at all. So far, we have not had the problem again.
So I am not sure this is the permanent fix, but it has worked for us so far. No outside entities create emails within our systems, so the directory should be relatively safe, and having outbound email not work is not an option.
This thread from TalkingTree might shed some light:
http://www.talkingtree.com/blog/index.cfm?mode=entry&entry=67FD4A34-50DA-0559-A042BCA588B4C15B
What they are saying is that it could be an issue with disk activity taking too long. You can increase the mail spool timeout with the JVM argument -Dcoldfusion.spooltimeout=120.
Oh, one more thing: if you're using cfmail to email dumps when an error occurs, make sure to add format="text" to the cfdump tags. Some of those emails can get pretty big and might be causing the error.